India must build AI at home before conquering the world: InMobi’s Mohit Saxena

by Incbusiness Team

India is at an inflection point in the global race for AI (artificial intelligence) supremacy. The United States and China have spent decades building deep technological capabilities that now anchor their positions in AI. India is often spoken of as an emerging force, yet the gap between aspiration and actual readiness remains wide.

Mohit Saxena, Co-founder and CTO of InMobi, captured this divide at TechSparks 2025.

Addressing a room packed with entrepreneurs and tech enthusiasts, he said, “There is a very stark difference in how that part of the world is dealing with AI and what we are doing.”

While many see India as a “third emerging superpower”, the country must first recognise the scale of the gap. The US has spent 50 years and “trillions” building its lead, while China has “replicated the entire system within decades,” he said.

He also noted that the US continues to dominate through massive investment and research-heavy universities, while China has advanced just as rapidly through coordinated state-industry efforts and a fast-scaling domestic ecosystem.

India, meanwhile, has world-class talent but not the same institutional depth, he argued. Understanding this gap is essential before attempting to bridge it. Without the ability to develop original IP, optimise foundational technologies, and conduct deep research, India risks becoming a large-scale user of AI systems rather than a creator of them, he added.

Foundations of AI engineering

A central message of the InMobi tech chief’s talk was the need to distinguish between using AI and building AI. Much of the excitement around large language models (LLMs) has created the impression that adopting an API is equivalent to becoming an AI company. Saxena disagrees.

“Ninety-nine percent of people think AI is just about an LLM that writes essays or edits email. Half the people I meet who say they are an AI company are not AI companies at all. They are simply users. There is no value and no IP. Someone else will come and do it better.”

To explain what real AI work involves, Saxena broke it down into four interconnected foundations.

The first is effective pre-processing and token optimisation, which reduces the cost of interacting with large models. Simple queries can waste thousands of tokens unless the system learns to identify what is relevant. Such optimisation may sound minor, but it makes the difference between an experiment and a viable product.
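
The kind of pre-processing Saxena describes can be sketched in a few lines. This is a minimal illustration, not InMobi's actual pipeline: it drops context paragraphs that share no keywords with the query and enforces a character budget as a rough proxy for tokens. The function name and thresholds are assumptions for illustration.

```python
import re

def trim_context(context: str, query: str, max_chars: int = 2000) -> str:
    """Keep only paragraphs that share keywords with the query,
    then truncate to a character budget (a rough proxy for tokens)."""
    keywords = {w.lower() for w in re.findall(r"\w+", query) if len(w) > 3}
    relevant = [
        p for p in context.split("\n\n")
        if keywords & {w.lower() for w in re.findall(r"\w+", p)}
    ]
    trimmed = "\n\n".join(relevant) or context  # fall back to full context
    return trimmed[:max_chars]
```

A real system would use embedding similarity or a learned retriever rather than keyword overlap, but the principle is the same: send the model only what the query needs.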

The second foundation is an informed choice of models. Companies today have access to a wide range of systems such as Gemini, OpenAI, Claude, and numerous open-source models that can run cheaply on local hardware.

“We spend a lot of time deciding which model to use,” said Saxena, noting that Glance AI often relies on “seven to nine models” because cost, accuracy and efficiency vary considerably. To manage this complexity, the company uses a single API that selects the best model at runtime. He added that open source models offer a strong path for India.

“We create our own version, train it on our data, and then it becomes our model,” he said.

Saxena argued that many applications do not need large language models at all and that “a small model of one billion parameters or even less works very well for most use cases.”
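
The runtime selection Saxena describes — a single API that picks among seven to nine models — can be sketched as a cost-aware router. The model names, prices, and quality scores below are entirely hypothetical placeholders, not Glance AI's actual pool:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # hypothetical USD pricing
    quality: float             # assumed benchmark score, 0-1

# Illustrative model pool; real routers would also track latency and context limits.
POOL = [
    Model("small-1b-local", 0.0001, 0.70),
    Model("mid-open-7b",    0.0010, 0.82),
    Model("frontier-api",   0.0150, 0.95),
]

def route(min_quality: float) -> Model:
    """Pick the cheapest model that clears the task's quality bar."""
    eligible = [m for m in POOL if m.quality >= min_quality]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)
```

For most requests the quality bar is low enough that the one-billion-parameter local model wins, which is exactly the point Saxena makes about small models serving most use cases.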

The third foundation is hardware efficiency. This is where India is particularly weak.

“I have not met an engineer who knows how to use an NVIDIA driver … We are sending our engineers back to college to learn GPUs. If we do not know GPUs, we cannot succeed in AI. The problem is not that companies need more GPUs. It is that they do not know how to use the GPUs they already have,” he said, adding that most companies run at 7% or 8% utilisation.

He stressed that GPU programming demands a mindset closer to systems engineering than to modern high-level software development.
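
A back-of-the-envelope model shows why utilisation collapses when engineers treat a GPU like ordinary software. If every launch carries fixed overhead (kernel launch, host-to-device transfer), tiny batches leave the GPU idle most of the time. The numbers below are illustrative assumptions, not measurements:

```python
def gpu_utilisation(batch_size: int,
                    compute_ms_per_item: float = 0.5,
                    fixed_overhead_ms: float = 12.0) -> float:
    """Fraction of wall time spent computing, given a fixed
    per-launch overhead. All timings are illustrative."""
    compute = batch_size * compute_ms_per_item
    return compute / (compute + fixed_overhead_ms)
```

With these assumed timings, a batch of two items yields roughly 7–8% utilisation — the figure Saxena cites for most companies — while batching 64 items pushes the same hardware above 70%. The fix is engineering (batching, fused kernels, overlap of transfer and compute), not more GPUs.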

The final foundation is output quality. Modern models are non-deterministic and can hallucinate or provide inaccurate results. Post-processing systems are essential to filter errors and ensure that the user receives reliable information. For any company trying to provide a real service, this layer becomes its own area of engineering and experimentation.
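
A minimal sketch of such a post-processing layer, assuming the model is asked to return structured JSON: malformed or incomplete responses are rejected rather than shown to the user. The function and key names are hypothetical:

```python
import json
from typing import Optional

def validate_output(raw: str, required_keys: set) -> Optional[dict]:
    """Post-process a model response: parse it as JSON and reject
    anything malformed or missing required fields, so unreliable
    output never reaches the user."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not required_keys <= data.keys():
        return None
    return data
```

In production this layer typically grows to include fact-checking against source data, confidence thresholds, and automatic retries — which is why Saxena calls it its own area of engineering.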

AI product and IP strategy

Once these foundations are understood, the question is how India should approach building products. Saxena is of the view that India must resist the temptation of quick wins in the form of SaaS products aimed at the US. These models can bring short-term revenue but are vulnerable to displacement by deeper AI players with stronger infrastructure and larger training pipelines.

Instead, he suggests a more sustainable route that mirrors the Chinese playbook: China built Baidu, ByteDance, and the rest of its tech ecosystem for domestic users first, and then scaled globally.

Build for domestic consumption first, he said, adding that India’s scale is large enough to generate meaningful data, refine models, and test products under diverse conditions.

“We have so much consumption and so many problems to solve in this country. Solve it here. If you solve it here, it becomes very easy to take the product outside,” Saxena noted.

He used the example of Glance and other InMobi products which were built in India but deployed across Asia, Japan and the US. Importantly, the model families behind them adapt to local faces, behaviour, and cultural variations. This shows that a product born in India can compete globally if it is rooted in strong technical foundations and flexible enough to learn from different kinds of data.

Such an approach also positions India to build valuable IP. When companies develop small custom models or multi-model systems that run efficiently on constrained hardware, they create innovations that are difficult to replicate, he said.

Research and technical depth

Emphasising the importance of research, Saxena highlighted that Indian students and engineers often view education purely as a path to employment. This leads to frequent job changes and a lack of long-term specialisation.

“Nobody does research in India. Education is a means to a job. People work for two years, switch companies, do this five times and their career is done,” he said.

He contrasted this with researchers in the US who may spend a decade on a single area of AI before contributing to industry. Such people are rare in India, and companies feel the absence when trying to build serious AI systems.

Saxena stressed the importance of mathematics, physics, and systems level thinking as they form the foundation of real AI research.

“If you are working in AI without depth, we are not going to make it. That is the reality.”


Edited by Swetha Kannan

(Disclaimer – This post is auto-fetched from publicly available RSS feeds. Original source: Yourstory. All rights belong to the respective publisher.)

