Sunday, October 12, 2025

India’s Answer to AI Safety: IIT Engineer Takes on Tech Giants’ Oversight Gap

New Delhi — While the world’s largest technology companies deploy artificial intelligence at unprecedented speed, a critical question has emerged: who is ensuring these systems are safe? Two IIT Kharagpur graduates believe India can provide the answer.

Sumit Verma, who earned his Computer Science and Engineering degree from IIT Kharagpur after overcoming early hardship following his father’s death, spent over four years working with major Japanese tech firms Mercari and Rakuten. There, he witnessed a troubling pattern: AI systems rolling out faster than the safeguards designed to protect users.

“The gap is real,” Verma says. “Companies are racing to deploy AI, but the frameworks to evaluate fairness, safety, and harmful outputs are lagging far behind. Someone has to build those guardrails.”

After returning to India in May 2024, Verma took on the role of Head of AI Engineering at Opensense Labs, where he led AI development initiatives. But a bigger vision was taking shape. In June 2025, he co-founded Responsible AI Labs with Pritam Prasun, CEO of Opensense Labs and fellow IIT Kharagpur alumnus, launching a startup dedicated to auditing and monitoring AI systems.

The timing proved fortuitous—or perhaps ironic. Shortly after founding RAIL, Verma received interview invitations from Meta’s London office and Google India, two of the very tech giants whose AI oversight he aims to strengthen.

He turned them both down.

“These companies need external accountability,” Verma explains. “I can’t build that from the inside.”

What drives Verma’s conviction runs deeper than technical curiosity. “I feel terrible when I read about people committing suicide, crimes, or misusing AI in harmful ways,” he says. “We can’t control end users, but we absolutely have control over what we deliver to them. We must monitor and restrict AI systems that can cause such harm.”

Responsible AI Labs’ flagship product, the RAIL Frameworks, measures AI outputs for bias, toxicity, and safety violations, catching problems that can lead to discriminatory hiring decisions, financial losses, or worse. The startup represents India’s growing expertise in AI governance, a field where the country is positioning itself as a global voice.

Verma’s decision also marks a reversal of a decades-old trend. Rather than joining India’s brain drain to Silicon Valley or remaining in Japan, he chose to build his company at home—even when international opportunities came calling.

“India has the talent, the technical depth, and the perspective to lead in responsible AI,” Verma says. “We’ve seen what happens when technology moves without accountability. This is our chance to do it differently.”

As AI reshapes everything from healthcare to employment across Asia, the IIT Kharagpur duo’s bet is that the region’s next great export won’t just be technology—it will be the wisdom to use it responsibly.

For these engineers, that mission is worth more than any Big Tech paycheck.

FOR MORE INFORMATION
https://responsibleailabs.ai/
