Updated 5 November 2025 at 17:11 IST

Responsible AI Governance Guidelines: MeitY Unveils 'Do No Harm' Framework

The Ministry of Electronics and Information Technology (MeitY), operating under the ambitious IndiaAI Mission, officially released the India AI Governance Guidelines today. This comprehensive framework is designed to pave the way for the safe, inclusive and responsible adoption of Artificial Intelligence across every sector of the nation's economy.

The guidelines, unveiled by Principal Scientific Adviser (PSA) Prof. Ajay Kumar Sood and MeitY Secretary Shri S. Krishnan, establish a balanced structure aimed at fostering cutting-edge AI innovation while actively mitigating potential risks to individuals and society.

Prof. Sood highlighted the core philosophy driving the new framework. "The guiding principle that defines the spirit of the framework is simple, 'Do No Harm'," he stated.

Prof. Sood emphasised the focus on "creating sandboxes for innovation and on ensuring risk mitigation within a flexible, adaptive system," positioning the IndiaAI Mission to "enable this ecosystem and inspire many nations, especially across the Global South."

Echoing this human-centric vision, MeitY Secretary Shri S. Krishnan noted, “Our focus remains on using existing legislation wherever possible. At the heart of it all is human centricity, ensuring AI serves humanity and benefits people’s lives while addressing potential harms.”

Seven Sutras: The Core of Ethical AI Development

The framework is founded on Seven Guiding Principles, or Sutras, designed to be technology-agnostic and universally applicable. These principles form the ethical compass for all AI development and deployment in India:

Trust is the Foundation: Acknowledging that without inherent public trust, the adoption and innovation of AI technology cannot thrive.

People First: Prioritising human-centric design, robust human oversight and the ultimate empowerment of people.

Innovation over Restraint: Encouraging responsible innovation as the default position, only applying restraint when specific harms are clearly identified.

Fairness & Equity: Committing to inclusive development and actively working to avoid bias and discrimination in AI systems.

Accountability: Ensuring clear responsibility is allocated across the AI value chain, backed by effective enforcement of regulations.

Understandable by Design: Requiring disclosures and explanations that are transparent and comprehensible to end-users and regulatory bodies.

Safety, Resilience & Sustainability: Mandating the creation of secure, robust and environmentally sustainable AI systems capable of withstanding systemic challenges.

Six Pillars Supporting AI Governance

The comprehensive governance structure is built upon six interdependent pillars, addressing the full spectrum of challenges from resource access to regulatory oversight.

  1. Infrastructure: Expanding access to essential foundational resources like compute power and data, attracting investment and leveraging India’s Digital Public Infrastructure (DPI) for inclusive scale.
  2. Capacity Building: Launching extensive education, training and upskilling programs to build trust and increase public awareness of both the risks and the vast potential of AI.
  3. Policy & Regulation: Adopting an agile, flexible and balanced regulatory approach, which includes reviewing current laws and making targeted amendments to address specific gaps.
  4. Risk Mitigation: Developing an India-specific risk assessment framework based on empirical evidence of harm and encouraging initial compliance through voluntary industry measures.
  5. Accountability: Establishing a graded liability system where responsibility is determined based on the system’s function, the level of risk involved and the diligence shown by the developer/deployer.
  6. Institutions: Adopting a unified whole-of-government approach. This includes setting up an AI Governance Group (AIGG) for strategy, supported by a Technology & Policy Expert Committee (TPEC), and resourcing the AI Safety Institute (AISI) for technical validation and safety research.

Blueprint for the Future: A Phased Action Plan

The guidelines include a structured action plan mapped to specific timelines to ensure continuous and responsive governance:

In the short term, the focus will be on foundational steps: establishing the key governance institutions, developing India’s custom risk frameworks and securing voluntary commitments from industry leaders.

The medium term will concentrate on implementation and scaling: suggesting necessary legal amendments, developing clear liability regimes, significantly expanding AI infrastructure access, launching broad public awareness campaigns and providing better access to AI safety tools and standards. Regulatory sandboxes will also be piloted during this phase.

The long-term goal is dynamic adaptation: reviewing and updating governance frameworks to maintain the sustainability of the digital ecosystem and drafting new laws as needed to address risks posed by rapidly evolving AI capabilities.

Practical Guidelines for Ecosystem Actors

To ensure accountable and transparent AI deployment, the guidelines provide a clear roadmap for key stakeholders:

For Industry (Developers and Deployers): Developers and deployers are instructed to comply strictly with all existing Indian laws, adopt the new voluntary principles and standards, commit to publishing transparency reports and provide an accessible grievance redressal mechanism to manage AI-related harms. They are also urged to mitigate risks using techno-legal solutions.

For Regulators: The mandate is to support innovation while only mitigating actual, demonstrated harms. Regulators are specifically advised to avoid imposing compliance-heavy regimes, promote techno-legal approaches and ensure that all frameworks remain flexible and subject to periodic review to keep pace with technological change.

This launch marks a critical step for India as it prepares to host the India-AI Impact Summit 2026 in New Delhi, an event expected to convene global leaders to deliberate on AI's transformative role in driving People, Planet and Progress.

Published By: Tuhin Patel

Published On: 5 November 2025 at 17:11 IST