

When the Ministry of Electronics and IT (MeitY) of the Union Government released the India Artificial Intelligence (AI) Governance Guidelines on 5 November 2025, under the IndiaAI Mission, it presented a ‘third path’ between Europe’s regulatory rigour and America’s fragmented oversight. This participatory approach offers a genuine innovation in AI governance, though its effectiveness will depend significantly on implementation.
While Europe imposes rigid compliance requirements through its AI Act and the US pursues fragmented sectoral oversight, India has chosen participatory governance over prescriptive regulation—a model that could prove transformative for developing economies worldwide.
The stakes couldn't be higher. India's AI market is projected to reach $5.10 billion in 2025, with an expected annual growth rate of 43.76 percent, potentially reaching $45 billion by 2031. This explosive growth is occurring in an economy that simultaneously hosts cutting-edge technology hubs and vast rural populations, making India's governance approach particularly consequential for billions of people in similar emerging economies.
At the heart of India's framework lie seven guiding principles—termed "Sutras" in homage to India's philosophical traditions—that establish ethical foundations without becoming legislative straitjackets.
These principles—trust, people first, innovation over restraint, fairness & equity, accountability, understandable by design, and safety, resilience & sustainability—function as normative standards rather than legally binding requirements.
This distinction is crucial. By positioning these principles as aspirational guideposts rather than compliance checkboxes, India has created space for contextual interpretation across diverse sectors and use cases. The framework's six governance pillars—technology & standards, capacity building, data & compute, safety & trust, policy & regulation, and ethics & rights—provide structural support while maintaining flexibility for sectoral customisation.
The European Union's (EU) AI Act is estimated to cost the European economy €31 billion over five years and reduce AI investments by almost 20 percent, with small and medium enterprises facing compliance costs of up to €400,000 for a single high-risk AI system. Annual compliance costs for each AI model classified as high-risk under the EU framework are estimated at approximately €52,000.
These costs would be devastating for India's diverse AI ecosystem, where adoption spans from large technology companies building foundation models to small enterprises deploying AI tools for agricultural optimisation. A regulatory approach imposing uniform compliance costs would inevitably favour large players with dedicated legal teams, potentially shutting smaller innovators out entirely.
What distinguishes India's approach most clearly is its embrace of participatory governance through institutional mechanisms designed for continuous stakeholder engagement. The AI Governance Group (AIGG), supported by the Technology and Policy Expert Committee (TPEC) and the AI Safety Institute (AISI), forms a consultative architecture tasked not with enforcing rigid rules but with facilitating dialogue and coordinating policy responses across government, industry, academia, and civil society.
The effectiveness of this consultative approach will depend on whether these institutions develop sufficient authority and resources to translate dialogue into coordinated action.
The concept of "distributed responsibility" across the AI value chain acknowledges the complexity of modern AI systems. However, implementing this approach will require clear guidance on roles and responsibilities at each stage—from data providers to model developers to deployers.
The emphasis on "Innovation over Restraint" as a guiding principle explicitly signals that innovation is the default position, with restraint applied only when empirically demonstrated risks warrant intervention.
The government's decision to work through sectoral regulators rather than creating an omnibus AI regulatory body reflects a sophisticated understanding of domain-specific challenges. Financial AI applications face different risks than healthcare AI or autonomous vehicles.
By allowing specialised regulators like the Reserve Bank of India to manage AI risks within their domains while MeitY provides overarching philosophical direction, the framework achieves both coherence and specificity.
India's Digital Public Infrastructure experience offers valuable lessons for AI governance. Platforms like UPI demonstrate how state-backed infrastructure can establish foundational protocols while enabling innovation.
The "tight-loose-tight" model—establishing clear standards and accountability while allowing flexibility in application development—provides a practical framework. Learning from both the successes and challenges of earlier DPI initiatives, including addressing access concerns and privacy protections, will strengthen AI governance implementation.
Some implementation challenges warrant attention. Operationalising principles like "Fairness & Equity" and "Understandable by Design" will require developing sector-specific standards and assessment methodologies. The framework would benefit from clearer guidance on transparency requirements, including provisions for algorithm documentation, impact assessments, and disclosure appropriate to different risk levels and use cases.
The framework's flexibility is both a strength and a potential limitation. Principles-based approaches enable contextual adaptation but require robust implementation mechanisms to ensure meaningful compliance. Developing clear procedures for principle interpretation, establishing accountability for violations, and creating accessible redress mechanisms will be crucial for translating aspirational standards into effective safeguards.
As India prepares to host the AI Impact Summit in February 2026, it presents an alternative governance model to the global community. The framework's success will be measured not just by economic metrics but by its ability to foster inclusive AI development, protect against algorithmic harms, and maintain public trust. Early implementation efforts should focus on operationalising core principles, building regulatory capacity, and establishing transparent accountability mechanisms.
Global AI governance discourse has been dominated by frameworks from advanced economies, often reflecting their specific capabilities and priorities. India's approach, developed for an economy with both cutting-edge innovation and developmental challenges, may offer insights for the many nations in similar circumstances. The framework deserves both fair evaluation and constructive critique as implementation progresses.
For nations across the Global South facing similar development imperatives, India's approach demonstrates that AI governance need not be a binary choice between innovation and restraint.
India's participatory governance model offers a valuable middle path for nations grappling with how to harness AI's benefits while mitigating its risks. By prioritising flexibility over rigidity, consultation over compulsion, and distributed responsibility over centralised control, India has created a framework that acknowledges a fundamental truth: effective AI governance in diverse, developing economies requires not prescriptive rules but participatory processes that evolve alongside technological capabilities and societal needs.
(Subimal Bhattacharjee is a Visiting Fellow at Ostrom Workshop, Indiana University Bloomington, USA, and a cybersecurity specialist. This is an opinion piece. The views expressed above are the author’s own. The Quint neither endorses nor is responsible for them.)