Monday, April 13, 2026

The Blueprint for India’s National AI Strategy

By R Kannan

As India navigates the complex economic waters of 2025, a year characterized by a resilient 6.9% GDP growth and a "cautiously optimistic" outlook, the nation faces a definitive choice. Will it remain a mere consumer of global technology, or will it emerge as a sovereign architect of the AI era? With global growth projected at a steady but below-trend 3.0% and geopolitical tensions acting as a persistent friction point, the strategic deployment of Artificial Intelligence (AI) has shifted from a peripheral innovation to a core pillar of national economic security.

The influence of AI is now pervasive, establishing itself as a powerful force in data analysis, creative processes, and consumer interactions. For India, which is projected to be the fastest-growing major economy this year, the mandate is clear: we must build an institutional and ethical framework that not only fosters innovation but also safeguards the democratic values of our 1.4 billion citizens.

Way Forward

I. Institutional Framework & Governance

Central AI Governance Group (AIGG)

The AIGG will serve as the apex body providing cohesive, whole-of-government leadership to prevent fragmented policy implementation across states. By centralizing decision-making, it ensures that AI initiatives align with India’s broader national security and economic goals. This group will act as a bridge between the Prime Minister’s Office and various technical ministries. It is designed to streamline approvals and provide a single point of accountability for the national AI mission.

AI Safety Institute (AISI)

Establishing an AISI is critical for conducting rigorous technical assessments and stress-testing AI models before public deployment. This institute will focus on identifying systemic risks, such as algorithmic biases or vulnerabilities to cyber-attacks. By setting indigenous safety benchmarks, India can ensure that foreign and domestic AI tools meet high standards of reliability. The AISI will provide the empirical data necessary to inform evidence-based regulatory decisions.

AI Ethics Advisory Board

This board will be composed of diverse experts tasked with reviewing the complex societal impacts of automation and machine learning. Its primary role is to ensure that AI development respects India’s pluralistic values and fundamental constitutional rights. By evaluating issues like surveillance, data privacy, and social equity, the board provides a human-centric layer to technical governance. It serves as a vital conscience for the state, balancing innovation with moral responsibility.

Standardized Definition of "AI Systems"

Creating a uniform definition of "AI systems" across all ministries is essential to eliminate legal ambiguity and regulatory overlap. Without a standardized lexicon, different departments might apply conflicting rules to the same technology, stifling cross-sectoral growth. A clear definition ensures that developers and investors have a predictable legal environment in which to operate. This foundational step is the prerequisite for building a sophisticated and enforceable legislative framework.

National AI Strategy with Time-Bound Objectives

A robust national strategy must move beyond vision statements to actionable, time-bound objectives for infrastructure and adoption. By setting clear milestones for 2030 and 2047, the government can provide the private sector with the certainty needed for long-term investment. This strategy will prioritize key sectors like healthcare, agriculture, and education to maximize societal benefits. Regular progress audits will ensure that the country remains agile in the face of rapid technological evolution.

Integration into Digital Public Infrastructure (DPI)

Integrating AI oversight into India’s existing DPI, like India Stack, will allow for seamless and scalable governance of digital services. This approach leverages the proven success of Aadhaar and UPI to build "AI-as-a-Service" for the common citizen. By embedding ethics and safety layers directly into the infrastructure, the government can automate compliance at the source. This ensures that AI benefits are delivered transparently and securely to the last mile of the population.

Cross-Sectoral Coordination Mechanism

A coordination mechanism is vital to maintain regulatory consistency across diverse fields such as finance, health, and transport. It prevents a "siloed" approach where one ministry’s regulations inadvertently hinder the technological progress of another. This mechanism will facilitate the sharing of best practices and technical resources between different regulatory bodies. Ultimately, it fosters a holistic ecosystem where AI can be deployed safely in multi-disciplinary environments.

Clear Jurisdictional Boundaries

Defining precise jurisdictional boundaries is necessary to avoid "regulatory turf wars" during AI-related enforcement. Clear mandates will specify which agency handles data breaches, algorithmic fraud, or physical accidents caused by autonomous systems. This clarity reduces the compliance burden on startups and ensures that victims of AI errors have a clear path to legal redress. It provides the structural backbone for a functional and fair dispute resolution system.

Multi-Stakeholder Consultative Process

This process ensures that AI policy is not developed in a vacuum but is informed by industry, academia, and civil society. Frequent consultations help policymakers stay ahead of the "pacing problem," where technology outstrips the law. By including civil society, the government can address concerns regarding job displacement and digital exclusion early in the policy cycle. This collaborative spirit builds public trust and ensures that AI solutions are inclusive by design.

AI Incident Database

The creation of a centralized AI Incident Database will allow for the systematic reporting and tracking of algorithmic failures and safety breaches. This "black box" approach for the digital world helps the industry learn from collective mistakes without stifling individual innovation. Public reporting increases transparency, while private sector data helps the AISI refine its safety testing protocols. Over time, this data will become a global resource for improving the robustness of AI worldwide.
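
The reporting flow above can be sketched in a few lines. The record fields (system_id, sector, severity) are illustrative assumptions, not a proposed official schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of an incident record and registry. Field names
# and severity levels are illustrative assumptions only.
@dataclass
class AIIncident:
    system_id: str    # identifier of the AI system involved
    sector: str       # e.g. "finance", "health", "transport"
    severity: str     # "low" | "medium" | "high"
    description: str
    reported_on: date

class IncidentDatabase:
    def __init__(self) -> None:
        self._records: list[AIIncident] = []

    def report(self, incident: AIIncident) -> None:
        """Record a new incident for later analysis."""
        self._records.append(incident)

    def by_severity(self, severity: str) -> list[AIIncident]:
        """Retrieve all incidents at a given severity level."""
        return [r for r in self._records if r.severity == severity]

db = IncidentDatabase()
db.report(AIIncident("credit-model-7", "finance", "high",
                     "Systematic score deflation for one region",
                     date(2025, 6, 1)))
print(len(db.by_severity("high")))  # → 1
```

Even a registry this simple supports the aggregate queries regulators need, such as counting high-severity failures per sector per quarter.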

Chief AI Officer (CAIO) in Departments

Institutionalizing the CAIO role within every major department ensures that AI is integrated strategically rather than as a mere IT upgrade. These officers will be responsible for identifying departmental use cases and ensuring that AI deployment meets national safety standards. They act as internal champions for digital transformation, bridging the gap between technical teams and policy leaders. This move ensures that the government itself becomes a sophisticated and responsible user of AI technology.

International Collaboration Channels

Establishing formal channels for global collaboration is essential for aligning India’s AI policies with emerging international norms. As AI is inherently borderless, India can participate in global bodies to help shape standards for data flow and safety. These channels will facilitate the exchange of researchers and the co-development of "AI for Good" initiatives with global partners. Through this alignment, India can position itself as a leader in the global South while maintaining interoperability with Western systems.

 

II. Infrastructure & Enabling Ecosystem

Scalable GPU/NPU Compute Infrastructure

Investing in scalable GPU and NPU infrastructure is the fundamental "hardware layer" required to move India from a consumer of AI to a creator. High-performance computing power is the primary bottleneck for training advanced models, and a sovereign compute capacity ensures national data remains secure. By building domestic clusters, India can reduce its reliance on expensive, foreign cloud providers. This infrastructure will act as a utility, powering everything from weather forecasting to complex genomic research. It is the essential engine needed to drive the projected 6.9% GDP growth through technological efficiency.

National Compute Marketplace

A National Compute Marketplace will democratize access to expensive hardware by providing subsidized "compute credits" to startups and researchers. This platform will function as an exchange, allowing entities to rent or share processing power based on their project needs. By lowering the entry barrier, the government can ensure that a lack of capital does not stifle a brilliant algorithmic breakthrough. This marketplace will also incentivize efficient resource allocation, ensuring that idle government compute capacity is utilized by the private sector. It transforms a scarce national resource into an accessible public good.
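
A minimal sketch of the compute-credit accounting such a marketplace implies; the entity names and credit amounts are hypothetical, and a real exchange would add pricing, scheduling, and audit trails:

```python
# Illustrative sketch of compute-credit accounting for a national
# marketplace. Balances are denominated in GPU-hours; all names and
# quantities are assumptions for demonstration only.
class ComputeMarketplace:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def grant_credits(self, entity: str, gpu_hours: int) -> None:
        """Issue subsidized compute credits to a startup or lab."""
        self.balances[entity] = self.balances.get(entity, 0) + gpu_hours

    def reserve(self, entity: str, gpu_hours: int) -> dict:
        """Deduct credits and return a reservation for a training job."""
        if self.balances.get(entity, 0) < gpu_hours:
            raise ValueError("insufficient compute credits")
        self.balances[entity] -= gpu_hours
        return {"entity": entity, "gpu_hours": gpu_hours}

market = ComputeMarketplace()
market.grant_credits("startup-a", 100)
job = market.reserve("startup-a", 40)
print(market.balances["startup-a"])  # → 60
```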

National Data Repository

Data is the "new oil," and a National Data Repository will provide the high-quality, non-personal datasets required to train accurate AI models. By aggregating anonymized data from transport, health, and urban planning, the government creates a goldmine for indigenous innovation. This repository will implement strict protocols to ensure privacy while maximizing the utility of public sector data. Access to such large-scale, India-specific datasets is crucial for developing AI that understands local contexts. It serves as the foundational library for the nation's digital future.

Regional AI Data Labs

Establishing AI Data Labs across various regions will decentralize innovation, moving it beyond the major tech hubs like Bengaluru or Hyderabad. These labs will provide local entrepreneurs with the tools, data, and mentorship needed to solve grassroots challenges unique to their geography. By focusing on regional languages and local socio-economic problems, these centres foster a truly inclusive AI ecosystem. They will act as incubators for "AI for India," ensuring that the benefits of the 6.2% increase in consumer spending are felt across the heartland. Grassroots innovation is the key to creating the jobs needed for our growing labour force.

Indigenous LLMs and Domain-Specific Solutions

Developing indigenous Large Language Models (LLMs) is a matter of strategic and cultural sovereignty for a nation as diverse as India. These models can be trained on Indian languages and cultural nuances to prevent the "western bias" inherent in existing global AI. Beyond general AI, domain-specific solutions in agriculture and medicine can revolutionize service delivery for millions. By owning the underlying technology, India ensures that its AI infrastructure cannot be switched off by external actors. This push for "Atmanirbhar AI" is central to long-term economic and technological independence.

Standardized Data Interoperability

Standardizing data interoperability across public sectors is vital to prevent "data silos" that hamper government efficiency. When the health department’s data can "talk" to the social welfare department’s systems, the delivery of public services becomes seamless and proactive. These standards will define how data is formatted, shared, and secured, creating a unified digital language for the state. Interoperability ensures that AI systems can draw from multiple sources to provide holistic insights. It is the plumbing that allows the Digital Public Infrastructure to flow effectively across the nation.

Open Access for Research and Academia

Implementing open access initiatives will grant academic institutions the same high-level data and compute resources as large corporations. This levels the playing field, allowing university researchers to push the boundaries of AI science without financial constraints. By bridging the gap between academia and industry, India can accelerate the commercialization of homegrown research. Open access also encourages a culture of transparency and peer review, which is essential for building trustworthy AI. It ensures that the next generation of engineers is trained on world-class infrastructure.

Multilingual AI for Inclusive Access

With hundreds of languages and dialects, multilingual AI is the only way to ensure that digital services are inclusive for all 1.4 billion citizens. AI that can process speech-to-text in local dialects will bridge the literacy gap, allowing every Indian to interact with the digital economy. This technology will empower the "SHEconomy" by giving women in rural areas direct access to markets and information. Multilingual capabilities are not just a feature; they are the primary interface for India’s digital democracy. Inclusive access is the ultimate goal of our national AI strategy.

Incentivize AI-Native Enterprises

The government can provide fiscal incentives, such as tax breaks and R&D grants, to encourage the growth of AI-native startups. By creating a favourable investment climate, India can attract both domestic and global venture capital into its deep-tech sector. Incentives should specifically target companies developing "AI for Social Good" to align private profit with national priorities. This will stimulate private investment, which is expected to lead the charge in India’s economic growth. A thriving startup ecosystem is essential for maintaining India's position as the fastest-growing major economy.

Enhance Digital Public Infrastructure (DPI)

Enhancing our existing DPI with AI capabilities will allow for "hyper-personalized" public service delivery at a massive scale. AI-driven DPI can automate everything from tax processing to the distribution of agricultural subsidies, reducing leakages and corruption. By embedding AI into the core of our digital architecture, we can provide real-time responses to citizen needs. This evolution of the "India Stack" will serve as a global model for how a developing nation can leapfrog traditional bureaucratic hurdles. It turns the government into a high-tech platform for citizen empowerment.

Regulatory Sandboxes

"Regulatory sandboxes" provide a safe, controlled environment where companies can test innovative AI applications without the immediate burden of full compliance. This allows the government to observe the technology's impact in real-time and craft evidence-based regulations. Sandboxes encourage "responsible experimentation," ensuring that safety does not come at the cost of speed. They are particularly useful for high-risk sectors like finance or healthcare, where errors have significant consequences. This agile regulatory approach is key to staying competitive in the global manufacturing and tech environment.

Dedicated Funding for Public Good AI

Dedicated public funding is necessary for AI projects that may not have an immediate commercial ROI but offer immense social value. This includes AI for climate change mitigation, rare disease diagnosis, and preserving endangered indigenous languages. Public funding ensures that the direction of AI development is guided by the needs of the many, not just the profits of a few. It supports "foundational research" that creates the breakthroughs the private sector will later commercialize. By investing in the public good, the government ensures that AI acts as a tide that lifts all boats in the Indian economy.

 

III. Regulation, Risk & Compliance

Risk-Based Approach to AI

Adopting a risk-based framework allows the government to categorize AI applications into tiers like "Limited," "High," or "Prohibited." This ensures that low-risk innovations, such as spam filters, face minimal oversight, while high-stakes tools in healthcare or banking undergo rigorous scrutiny. Prohibiting "unacceptable" risks, such as social scoring, protects fundamental rights from technological overreach. This targeted strategy prevents a "one-size-fits-all" regulation that could stifle startups. It balances the need for public safety with the goal of remaining a globally competitive tech hub.
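
The tiering logic can be illustrated as a simple lookup. The use cases assigned to each tier here are examples drawn from the paragraph above, not a proposed legal classification:

```python
# Hedged sketch of risk-tier classification mirroring the Limited /
# High / Prohibited categories described above. The sets below are
# illustrative assumptions, not an official taxonomy.
PROHIBITED = {"social_scoring", "mass_biometric_surveillance"}
HIGH_RISK = {"credit_scoring", "medical_diagnosis", "law_enforcement"}

def classify(use_case: str) -> str:
    """Map a use case to its regulatory tier."""
    if use_case in PROHIBITED:
        return "Prohibited"
    if use_case in HIGH_RISK:
        return "High"
    return "Limited"  # default: minimal oversight

print(classify("spam_filter"))     # → Limited
print(classify("credit_scoring"))  # → High
print(classify("social_scoring"))  # → Prohibited
```

The point of the tiered design is visible in the code: only the two smaller sets carry compliance obligations, so the default path for everyday applications stays friction-free.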

Algorithmic Transparency

Drafting transparency requirements ensures that consumer-facing apps disclose when AI is influencing a user’s choices or data feed. Users have a right to know if a recommendation engine or pricing model is utilizing their personal history to alter their digital experience. This move builds public trust and reduces the "black box" mystery often associated with proprietary algorithms. Transparency acts as a deterrent against deceptive practices and hidden biases. It empowers the Indian consumer to make informed decisions in an increasingly automated marketplace.

Mandatory Algorithmic Audits

High-risk AI systems, such as those used in law enforcement or credit scoring, must undergo mandatory third-party audits. These audits verify that the models are performing as intended and are free from critical technical flaws. By requiring periodic reviews, the government can ensure that "drift" or declining accuracy over time is identified and corrected. This institutionalizes a culture of safety and reliability within the AI development lifecycle. It provides a technical guarantee that matches the legal standards set by the state.
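
The drift check a periodic audit performs can be reduced to a comparison against the model's baseline. The 5-point tolerance below is an assumed threshold for illustration:

```python
# Minimal sketch of a periodic drift check: flag a model whose audited
# accuracy has fallen more than a tolerance below its certified
# baseline. The tolerance value is an illustrative assumption.
def drift_alert(baseline_acc: float, audited_acc: float,
                tolerance: float = 0.05) -> bool:
    """Return True when the accuracy decline exceeds the tolerance."""
    return (baseline_acc - audited_acc) > tolerance

print(drift_alert(0.92, 0.90))  # → False: within tolerance
print(drift_alert(0.92, 0.84))  # → True: drift exceeds 5 points
```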

Accountability Frameworks

Defining clear accountability frameworks is essential to determine who is responsible when an AI system makes a flawed or harmful decision. Whether the error lies with the data scientist, the service provider, or the end-user, a legal trail can be established. This framework ensures that "the machine did it" is never an acceptable legal defence. Accountability encourages companies to invest more heavily in safety and quality control from the outset. It provides the necessary structure for insurance and legal industries to manage AI-related risks.

Liability Rules: Developers vs. Deployers

Clear liability rules distinguish between the "developer" who builds the AI and the "deployer" who uses it for a specific business purpose. For instance, if a generic LLM is used by a hospital for diagnosis, the liability for a wrong prescription can be clearly partitioned. This clarity prevents legal gridlock and protects developers from being sued for how their general-purpose tools are misused by others. It creates a fair playing field where each entity is responsible for the risks they can actually control. This legal certainty is a major driver for private sector confidence and investment.

Explainability Standards

Mandating explainability ensures that AI outputs in critical sectors can be understood and challenged by human operators. If an AI denies a loan or a medical claim, the system must be able to provide the specific reasoning behind that decision in human-readable terms. This "right to explanation" is a cornerstone of digital justice, ensuring that automated decisions are not arbitrary. Explainability helps experts debug systems and identify the root causes of biased or incorrect results. It bridges the gap between complex machine logic and the requirements of administrative law.
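
One common way to meet such a mandate is to attach "reason codes" to each adverse decision. In this sketch the feature names and thresholds are hypothetical, chosen only to illustrate the pattern:

```python
# Hedged sketch of human-readable reason codes for an automated loan
# denial. The features and cut-offs below are illustrative assumptions,
# not an actual underwriting policy.
def explain_denial(features: dict[str, float]) -> list[str]:
    """Return the specific reasons contributing to a denial."""
    reasons = []
    if features.get("credit_history_months", 0) < 24:
        reasons.append("Credit history shorter than 24 months")
    if features.get("debt_to_income", 0) > 0.4:
        reasons.append("Debt-to-income ratio above 40%")
    return reasons

for reason in explain_denial({"credit_history_months": 10,
                              "debt_to_income": 0.5}):
    print(reason)
```

Because each reason names a concrete, checkable condition, an applicant can contest the decision and an auditor can verify that the stated grounds match the model's actual inputs.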

Bias Mitigation Benchmarks

Developing benchmarks for training datasets is vital to ensure that AI does not perpetuate historical social or gender prejudices. India’s diverse demographics require datasets that are representative of all castes, religions, and regions to avoid discriminatory outcomes. By setting technical standards for "fairness," the government provides a roadmap for developers to build more equitable tools. These benchmarks will be used by the AISI to test models before they are scaled nationally. It ensures that the "SHEconomy" and marginalized communities are not left behind by biased algorithms.
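
One widely used fairness benchmark, the demographic parity gap, can be computed directly from decision data. The groups and decisions below are synthetic, for illustration only:

```python
# Sketch of the demographic parity gap: the difference in
# positive-outcome rates between the best- and worst-treated groups.
# A gap of 0 means all groups receive positive decisions at the same
# rate. The data here is synthetic.
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """outcomes maps each group name to a list of 0/1 decisions."""
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)

gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],   # 75% approval rate
    "group_b": [1, 0, 0, 1],   # 50% approval rate
})
print(gap)  # → 0.25
```

A benchmark regime would set a maximum acceptable gap per application tier and require models exceeding it to be retrained before national deployment.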

Prohibition of Unauthorized Surveillance

Strictly prohibiting unauthorized surveillance and non-consensual biometric processing is essential for protecting the privacy of 1.4 billion citizens. This regulation prevents the misuse of facial recognition or gait analysis in public spaces without a clear legal mandate. It ensures that India's digital transformation does not evolve into a "surveillance state" architecture. By setting these boundaries, the government reinforces the constitutional right to privacy. This protection is a prerequisite for maintaining public trust in the Digital Public Infrastructure.

Content Moderation for Generative AI

Establishing moderation guidelines for generative AI helps prevent the mass production of misinformation or hate speech. Developers must implement safety filters that prevent their models from generating harmful instructions or illegal content. These guidelines will hold platforms responsible for the "outputs" of their AI, encouraging them to build more robust guardrails. As generative AI becomes a primary tool for content creation, these rules protect the integrity of the information ecosystem. They ensure that AI serves as a tool for creativity, not a weapon for social discord.

Labelling for AI-Generated Content

Requiring clear labels or digital watermarks for AI-generated content is the first line of defence against deepfakes. As AI-generated audio and video become indistinguishable from reality, citizens must be alerted to what has been synthesized. This labelling allows the public to verify the authenticity of political speeches, news reports, and digital evidence. It supports the efforts of the AIGG to maintain social stability during election cycles and sensitive events. Transparency in origin is key to preserving the "truth" in the digital age.
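
In its simplest form, such a requirement means attaching provenance metadata at generation time. The field names in this sketch are illustrative assumptions, not an existing watermarking standard:

```python
# Sketch of attaching and checking a provenance label in media
# metadata. The "ai_generated" key and generator tag are hypothetical;
# real systems would embed cryptographically verifiable watermarks.
def label_content(metadata: dict, generator: str) -> dict:
    """Return a copy of the metadata carrying an AI-provenance label."""
    labelled = dict(metadata)
    labelled["ai_generated"] = True
    labelled["generator"] = generator
    return labelled

def is_ai_generated(metadata: dict) -> bool:
    """Check whether metadata carries the AI-provenance label."""
    return bool(metadata.get("ai_generated", False))

clip = label_content({"title": "campaign_speech.mp4"}, "gen-model-x")
print(is_ai_generated(clip))                        # → True
print(is_ai_generated({"title": "live_feed.mp4"}))  # → False
```

A metadata flag like this is trivially stripped, which is why mature labelling regimes pair it with watermarks embedded in the media signal itself.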

Data Protection and Privacy Standards

Enforcing rigorous data protection standards ensures that the "fuel" for AI—personal data—is handled with the highest level of security. This involves strict adherence to consent-based frameworks and data localization rules where necessary for national security. Protecting individual privacy prevents identity theft and the unauthorized profiling of Indian citizens by foreign entities. These standards are the foundation upon which the National Data Repository will be built to ensure safety. Reliable data protection is what enables the 6.2% projected growth in consumer spending to happen securely online.

Grievance Redressal Mechanism

A dedicated grievance redressal mechanism provides citizens with a formal path to contest decisions made by AI systems. Whether it is an error in an automated tax assessment or an unfair dismissal by an algorithm, people need a human-in-the-loop for appeals. This mechanism must be accessible, time-bound, and transparent to be effective for the common man. It acts as a safety net, ensuring that the human element remains supreme in a digitized government. This accountability loop is what makes the AI institutional framework truly democratic.

IV. Capacity Building & Workforce

National AI Literacy Program

A nationwide literacy program is essential to demystify artificial intelligence for the general public and ensure inclusive participation in the digital economy. This initiative aims to educate citizens on how AI impacts daily life, from personalized content to digital banking, while fostering critical thinking about automated systems. By reducing the digital divide, the government empowers the growing middle class to leverage AI for better economic opportunities. Public awareness ensures that technology serves as a tool for empowerment rather than a source of exclusion. Ultimately, a literate populace is the strongest defence against misinformation and the unethical use of AI.

AI-Linked School Curriculum

Integrating AI concepts into primary and secondary education ensures that the next generation is "AI-native" and ready for the future job market. The curriculum will focus on foundational logic, data ethics, and the creative use of machine learning tools, moving beyond traditional computer science. Early exposure helps students understand both the potential of AI and the importance of human-centric oversight in technology. This educational shift supports India's goal of maintaining its status as a global talent hub for the 2025-2030 decade. By investing in youth, India builds a sustainable pipeline of innovators capable of driving resilient GDP growth.

Higher Education Research Pathways

Developing specialized pathways in higher education is critical to fostering world-class AI researchers and indigenous intellectual property. These programs will incentivize deep-tech research in areas like Large Language Models (LLMs) and domain-specific AI for healthcare and agriculture. By providing grants and advanced lab access, India can reverse the "brain drain" and attract top-tier academic talent back to domestic institutions. Collaboration between universities and global research bodies will ensure that Indian scholars remain at the forefront of AI breakthroughs. This focus on high-end innovation is the engine that will propel India toward its long-term economic aspirations.

Public Sector Reskilling Initiatives

Implementing AI reskilling for public sector employees is vital for modernizing government service delivery and improving administrative efficiency. As AI-driven Digital Public Infrastructure (DPI) scales, civil servants must be trained to manage automated systems and interpret data-driven insights. Training will focus on "human-in-the-loop" decision-making to ensure that technology enhances, rather than replaces, public accountability. This workforce transition addresses post-pandemic weaknesses in the public sector by creating a more agile and tech-savvy bureaucracy. A skilled public workforce is the backbone of a transparent and digitally empowered state.

Law Enforcement AI Training

Training law enforcement in AI-enabled crime detection is necessary to combat the rise of sophisticated cybercrimes and deepfake-related fraud. Officers will learn to use AI for pattern recognition in financial crimes, forensic data analysis, and predictive policing within ethical boundaries. Understanding how to detect and investigate AI-generated misinformation is crucial for maintaining social stability in a volatile global environment. These capabilities allow agencies to stay ahead of bad actors who use generative AI to disrupt public order. Modernized law enforcement is a prerequisite for the high-trust environment needed for sustained economic investment.

Regulatory Technical Capacity

Building technical capacity within regulatory bodies ensures that oversight is informed by the actual mechanics of the technology being governed. Regulators must understand algorithmic complexity to conduct mandatory audits and enforce transparency requirements effectively. This specialized knowledge prevents over-regulation that could stifle the growth of the fastest-growing major economy. Capacity building ensures that policies remain agile and adaptive to the rapid changes predicted for the 2025 landscape. Informed oversight is the key to balancing rapid innovation with the necessary guardrails for public safety.

National "AI Academy"

An "AI Academy" will serve as a centralized hub to standardize internal training programs for all government departments and public agencies. By creating a unified pedagogical framework, the academy ensures that every "Chief AI Officer" and data scientist operates with the same high standards of ethics and technical rigor. This institution will facilitate the exchange of best practices and case studies across various sectors, from urban planning to rural development. It acts as a continuous learning centre, updating its modules as AI evolves from simple automation to complex creative processes. Standardization reduces fragmented implementation and accelerates the national AI mission.

AI-Led Talent Assessment Platforms

Promoting AI-led platforms for talent assessment helps the workforce find the right roles in an economy shifting toward automation. These platforms can identify skill gaps in real-time and recommend personalized learning paths for workers affected by technological disruption. By using unbiased algorithms for job matching, India can optimize its labour force participation, particularly within the growing "SHEconomy". Efficient talent allocation is essential for addressing the challenge of creating enough jobs for a growing population. This data-driven approach to human resources ensures that the labour market remains resilient amidst global economic challenges.

Industry-Academia Partnerships

Encouraging formal partnerships between industry and academia ensures that educational outcomes are aligned with the actual needs of the AI-native enterprise sector. Companies can provide real-world datasets and "compute" resources for university projects, while academia offers the deep research needed for commercial breakthroughs. This collaboration accelerates the cycle of innovation, allowing India to lead in manufacturing and service sector AI applications. Joint ventures can also focus on vocational training, creating a workforce ready for immediate employment in high-growth sectors. Such synergy is vital for maintaining the 6.9% GDP growth trajectory projected for 2025.

Public Awareness Campaigns

Public-facing campaigns are necessary to educate the masses about the dual nature of AI—its immense benefits and its inherent risks like privacy loss and algorithmic bias. These campaigns will promote a culture of "digital hygiene," teaching citizens how to identify AI-generated content and protect their personal data. By being transparent about how the government uses AI, these initiatives build the public trust required for large-scale digital transformation. Informed citizens are better equipped to navigate a world where AI is pervasive in consumer interactions and creative processes. Transparency is the foundation of a stable and ethical digital society.

Vocational Training in AI-Ready Skills

Supporting vocational training in AI-ready skills, such as data annotation and model curation, creates immediate job opportunities for the youth in rural and semi-urban areas. These "middle-skill" roles are the backbone of the global AI supply chain, providing the labelled data needed for high-quality training sets. This initiative helps diversify the economy and provides a safety net for those who may be displaced by automation in traditional sectors. By focusing on these practical skills, India can become the world’s "data back-office," supporting global AI development while boosting domestic incomes. Vocational training ensures that the benefits of the AI revolution are distributed across all levels of the workforce.

AI "Translators"

Developing a cadre of AI "translators" is essential to bridge the gap between technical teams and business or policy leaders. These professionals possess both the technical understanding of machine learning and the strategic insight to apply it to real-world economic problems. Translators ensure that AI projects are not just "science experiments" but are designed to deliver clear societal or commercial value. They play a crucial role in explaining complex AI decisions to stakeholders, ensuring accountability and transparency in governance. As AI becomes more pervasive, these bridge-builders will be the key to successful and ethical technology adoption.

V. Ethics, Trust & Sustainability

National AI Ethics Manifesto

Drafting a National AI Ethics Manifesto will serve as the moral compass for India’s digital journey, ensuring that technology aligns with constitutional values and the diverse social fabric of the nation. This document will articulate the fundamental principles of fairness, dignity, and transparency that must be embedded in every algorithm. By setting a high standard for moral accountability, it helps prevent the "black box" phenomenon where machine logic overrides human rights. The manifesto will be a public commitment that builds trust between the state and its 1.4 billion citizens. It provides a shared ethical vocabulary for developers, policymakers, and the public alike.

Human-Centred Design

Prioritizing human-centred design in all government-led AI projects ensures that technology is built around the needs and limitations of citizens, rather than forcing people to adapt to complex machines. This approach is vital for the 2025 landscape where AI becomes pervasive in consumer interactions and service delivery. By focusing on empathy and usability, the government can bridge the digital divide and ensure that rural populations are not alienated by automation. Human-centred systems prioritize safety and accessibility, making the state's digital transformation more inclusive and effective. Ultimately, it ensures that AI remains a tool for human empowerment, supporting the resilient 6.9% GDP growth target.

Fairness and Non-Discrimination Enforcing strict fairness and non-discrimination standards is essential to prevent AI from magnifying existing social prejudices or creating new forms of digital exclusion. Public AI services can be tested against benchmarks that ensure equitable outcomes for all castes, religions, and genders, protecting the "SHEconomy" and marginalized groups. These standards will act as a legal safeguard against biased training data that could lead to unfair denials of services or benefits. By institutionalizing fairness, India ensures that its Digital Public Infrastructure remains a democratic asset rather than a tool for profiling. Trust in public systems is the foundation of a stable and thriving economy.
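To make the idea of testing public AI services against equity benchmarks concrete, here is a minimal sketch of one such check. The demographic-parity metric and the 10-point gap threshold are illustrative assumptions for this sketch, not standards mandated anywhere in the strategy.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in approval rates across groups.

    `decisions` is a list of (group, approved) pairs; the gap is the
    difference between the highest and lowest group approval rate.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative audit: flag a service whose group approval rates
# diverge by more than 10 percentage points (assumed threshold).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)  # ~0.33: fails the benchmark
```

In practice an auditing body would apply several complementary metrics (equalized odds, calibration) rather than parity alone, since a single metric can be satisfied while other forms of bias persist.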

Socio-Economic Impact Assessments Conducting Socio-Economic Impact Assessments for major AI deployments will allow the government to anticipate and mitigate the risks of automation before they manifest. These evaluations will analyse how a new AI system might affect labour force participation rates or disrupt household balance sheets. By identifying potential "losers" in the technological transition, the state can proactively design support systems or reskilling programs. This forward-looking approach addresses the challenge of creating enough jobs for India's growing labour force. It ensures that the transition to an AI-driven economy is both stable and socially just.

Environmental Sustainability Benchmarks Implementing environmental sustainability benchmarks for AI compute usage is critical as the nation invests in scalable GPU/NPU infrastructure to power its growth. AI training and data centres are energy-intensive; therefore, India can align its compute needs with its broader commitment to decarbonization and renewable energy. By mandating energy-efficient hardware and carbon-neutral operations, the government can minimize the environmental footprint of its digital ambition. These benchmarks will encourage the private sector to adopt "green AI" practices as part of their corporate responsibility. Sustainable growth ensures that today's technological gains do not come at the expense of future generations.
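A sustainability benchmark ultimately reduces to simple arithmetic: energy drawn by the compute fleet multiplied by the carbon intensity of the grid that powers it. The sketch below shows that calculation; the GPU count, power draw, and grid intensity figures are illustrative assumptions, not measured values.

```python
def training_carbon_kg(gpu_count, avg_power_kw, hours, grid_intensity_kg_per_kwh):
    """Estimate CO2 (kg) for a training run:
    energy (kWh) = devices x average power x duration,
    emissions    = energy x grid carbon intensity."""
    energy_kwh = gpu_count * avg_power_kw * hours
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical run: 64 GPUs at 0.4 kW each for 72 hours
# on a grid emitting 0.7 kg CO2 per kWh.
estimate = training_carbon_kg(64, 0.4, 72, 0.7)  # ~1.29 tonnes CO2
```

A benchmark regime would compare such estimates against a published cap per project, rewarding operators who schedule training on renewable-heavy grid hours.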

Accessibility Guidelines Ensuring comprehensive accessibility guidelines allows marginalized groups and people with disabilities to interact seamlessly with AI-driven public services. This involves building multilingual capabilities and speech-to-text interfaces that cater to those with varying levels of literacy or physical impairments. By removing digital barriers, the government fulfills its promise of inclusive growth and protects the rights of every citizen to access state benefits. Accessibility is not just a technical feature but a requirement for a truly democratic Digital Public Infrastructure. It ensures that the benefits of the fastest-growing major economy reach the very last mile.

"Human-in-the-Loop" Protocols Establishing "human-in-the-loop" protocols for high-stakes decisions ensures that no life-altering choice—such as a medical diagnosis or a legal ruling—is made by an algorithm without human oversight. This safeguard maintains accountability and allows for human nuance and empathy to override machine logic when necessary. These protocols are especially critical in navigating the "cautiously optimistic" but uncertain economic environment of 2025. They provide a necessary safety net against algorithmic errors and systemic glitches. Keeping humans in the loop preserves the principle that technology should assist, not replace, human judgment and responsibility.
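The routing logic behind such a protocol can be sketched in a few lines: automated output is accepted only when the decision is low-stakes and the model is confident; everything else is escalated to a human reviewer. The 0.95 confidence threshold here is an assumed parameter for illustration.

```python
def decide(prediction, confidence, high_stakes, threshold=0.95):
    """Accept a model decision only for low-stakes, high-confidence
    cases; escalate everything else to a human reviewer."""
    if high_stakes or confidence < threshold:
        return ("escalate_to_human", prediction)
    return ("auto_approve", prediction)

# Routine, confident decision is automated; anything high-stakes
# or uncertain goes to a person.
assert decide("eligible", 0.99, high_stakes=False) == ("auto_approve", "eligible")
assert decide("eligible", 0.99, high_stakes=True)[0] == "escalate_to_human"
assert decide("eligible", 0.80, high_stakes=False)[0] == "escalate_to_human"
```

The design choice worth noting is that stakes override confidence: a medical or legal decision is escalated even when the model is certain.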

Privacy-by-Design Mandating privacy-by-design for all AI software procurement ensures that data protection is baked into the product from the initial concept phase, rather than added as an afterthought. This standard protects the sensitive personal data of citizens as they interact with increasingly pervasive AI systems. By requiring developers to use anonymization and encryption as default settings, the government minimizes the risk of mass surveillance or data breaches. This approach aligns with the demand for authentic, transparent, and ethical digital interactions. Privacy-by-design is the bedrock of public trust in a data-driven economy.
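One concrete privacy-by-design technique is pseudonymizing direct identifiers at the point of collection, so raw identities never reach downstream AI systems. The sketch below uses a keyed hash for that purpose; the field names are hypothetical, and this is pseudonymization (reversible by the key holder), not full anonymization.

```python
import hashlib
import hmac

def pseudonymize(record, secret_key, pii_fields=("name", "id_number")):
    """Replace direct identifiers with keyed hashes before the record
    leaves the collection layer. Field names are illustrative."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            out[field] = hmac.new(secret_key, str(out[field]).encode(),
                                  hashlib.sha256).hexdigest()[:16]
    return out

key = b"rotate-me-regularly"  # in production: a managed, rotated secret
safe = pseudonymize({"name": "Asha", "district": "Pune"}, key)
# Non-identifying fields pass through; identifiers become keyed hashes.
assert safe["district"] == "Pune"
assert safe["name"] != "Asha"
```

Using a keyed hash (HMAC) rather than a plain hash matters: without the key, an attacker cannot rebuild identities by hashing a dictionary of known names.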

Intellectual Property (IP) Rights Protecting intellectual property rights while enabling large-scale training is a delicate balance that is vital for sustaining private investment in AI-native enterprises. The government can create frameworks that allow researchers to use high-quality datasets without violating the rights of original content creators. This ensures a healthy ecosystem where both the "trainers" and the "creators" are incentivized to innovate. Clear IP rules prevent legal disputes that could stall technological progress and dampen investor optimism. Balancing these interests is key to India's ambition of leading in the global manufacturing and creative environment.

Job Displacement Monitoring Tracking and mitigating job displacement risks in vulnerable sectors is a priority as India navigates a highly competitive global environment. While AI creates new opportunities, it also threatens traditional roles, particularly in sectors with weak growth like agriculture. The government can stay vigilant and adapt its labour policies to support those whose livelihoods are impacted by automation. This involves tracking employment trends in real-time to intervene with targeted support or vocational training. Proactive labour market management is essential to maintain social stability and sustain the promising 2025 economic outlook.

"Positive Human Values" for Alignment Defining a set of "positive human values" will guide the alignment of AI systems, ensuring they act in ways that are beneficial to Indian society. This alignment process involves training models to prioritize safety, honesty, and helpfulness while respecting local cultural sensitivities. By steering AI toward these goals, the government can prevent the emergence of harmful or antisocial machine behaviours. These values will serve as the technical and ethical foundation for the National AI Strategy. Value-aligned AI is more likely to be accepted by the public and integrated successfully into the national infrastructure.

Open-Source AI Development Promoting open-source AI development where appropriate fosters transparency and allows for collective scrutiny of the algorithms that govern public life. Open-source models can be audited by independent researchers to identify biases or security flaws that might remain hidden in proprietary software. This collaborative approach encourages "frugal innovation," allowing startups to build upon existing foundations rather than reinventing the wheel. It aligns with India's successful history of building open-standard Digital Public Infrastructure like UPI. Transparency through open source is a powerful tool for building a trustworthy and resilient AI ecosystem.

VI. Monitoring & Adaptive Governance

Monitoring and adaptive governance is the most critical pillar of the strategy; its key mechanisms are detailed below.

Establish horizon-scanning exercises for emerging AI trends

  • Horizon-scanning will identify disruptive AI trends early, ensuring India’s strategy remains relevant in a rapidly evolving technological landscape.
  • These exercises allow policymakers to anticipate shifts in creative processes and data analysis before they impact the broader economy.
  • By monitoring global advancements, India can adapt its infrastructure to support emerging innovations like advanced Large Language Models.
  • This proactive approach helps mitigate risks from trade disruptions or geopolitical tensions that often accompany new tech frontiers.
  • It ensures that the national AI mission stays aligned with the "cautiously optimistic" growth projections for the 2025-2030 period.

Require quarterly reporting to the Board or Legislature on AI progress

  • Mandatory quarterly reports will provide the transparency needed to track the implementation of AI across various government departments.
  • These updates ensure that AI initiatives are contributing effectively to the projected 6.9% GDP growth and resilient economic outlook.
  • Legislative oversight helps verify that public spending on AI infrastructure is delivering the intended service sector improvements.
  • Regular reporting creates a record of how AI is being used to manage inflationary pressures and support monetary policy.
  • It holds the government accountable for the ethical and safe deployment of technologies in a world increasingly driven by AI.

Implement anomalous behaviour detection in model usage logs

  • Anomalous behaviour detection acts as a continuous digital audit, identifying potential safety breaches or algorithmic "drift" in real-time.
  • This technical safeguard is essential for maintaining the integrity of Digital Public Infrastructure as AI becomes more pervasive.
  • Monitoring usage logs helps detect unauthorized access or non-consensual data processing that could threaten citizen privacy.
  • By identifying patterns of misuse, the government can proactively refine its security protocols for the high-risk AI tech stack.
  • This layer of monitoring builds the public trust required for sustained consumer spending and digital participation.
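A minimal version of such a "digital audit" can be built from robust statistics on usage logs: hours whose request volume deviates sharply from the median are flagged for review. The median-absolute-deviation method and the 3.5 threshold below are illustrative assumptions; a production system would use richer per-user and per-endpoint features.

```python
from statistics import median

def flag_anomalies(hourly_counts, threshold=3.5):
    """Flag hours whose request volume deviates from the median by more
    than `threshold` robust z-scores (median absolute deviation).
    A crude but outlier-resistant drift/abuse signal."""
    med = median(hourly_counts)
    mad = median(abs(c - med) for c in hourly_counts)
    if mad == 0:
        return []
    return [i for i, c in enumerate(hourly_counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Seven normal hours, then a sudden spike in the last hour.
counts = [100, 104, 98, 101, 97, 103, 99, 450]
assert flag_anomalies(counts) == [7]
```

MAD is preferred over mean/standard deviation here because a single extreme hour would inflate the standard deviation enough to mask itself.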

Enforce Service Level Agreements (SLA) for AI uptime and safety

  • SLAs ensure that AI-driven public services remain reliable and safe for the millions of citizens relying on them for daily interactions.
  • These agreements mandate strict performance benchmarks, preventing service disruptions that could impact economic productivity.
  • For high-stakes sectors like healthcare or finance, safety-focused SLAs ensure that AI outputs meet rigorous quality standards.
  • Enforceable contracts help manage the highly competitive global environment by ensuring domestic AI tools are world-class.
  • Reliable uptime is critical for supporting the ongoing infrastructure investments driving India's 2025 growth.
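The enforcement side of an uptime SLA is a straightforward calculation, sketched below. The 99.9% target is an assumed figure for illustration; actual targets would be negotiated per service and sector.

```python
def sla_compliance(total_minutes, downtime_minutes, target_pct=99.9):
    """Check measured uptime against an SLA target for a reporting period."""
    uptime_pct = 100 * (total_minutes - downtime_minutes) / total_minutes
    return {"uptime_pct": round(uptime_pct, 3), "met": uptime_pct >= target_pct}

# A 30-day month has 43,200 minutes; a 99.9% target allows
# roughly 43 minutes of downtime. 60 minutes breaches it.
report = sla_compliance(total_minutes=43_200, downtime_minutes=60)
assert report["met"] is False
```

Safety SLAs would extend the same pattern to quality metrics (error rates, escalation latency) rather than availability alone.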

Conduct regular vulnerability scanning on the government AI tech stack

  • Regular scanning identifies security weaknesses in the AI hardware and software layers before they can be exploited by bad actors.
  • This practice protects sensitive national data repositories from the cyber risks associated with increased geopolitical tensions.
  • Vulnerability assessments ensure that the GPU and NPU compute infrastructure remains resilient against systemic technical failures.
  • Protecting the tech stack is a prerequisite for maintaining the stable banking sector that supports private investment.
  • It provides a technical guarantee of safety, aligning with the demand for authentic and transparent digital governance.

Establish "kill switches" for autonomous systems posing systemic risks

  • "Kill switches" provide a final fail-safe to immediately deactivate autonomous systems that exhibit uncontrollable or harmful behaviour.
  • This protocol is a vital component of a risk-based approach, protecting the nation from unforeseen systemic failures.
  • Having a manual override ensures that human authority remains supreme in high-stakes decisions affecting the economy or public safety.
  • It acts as a deterrent against the deployment of untested or highly volatile AI models in critical infrastructure.
  • This safety measure addresses public concerns about AI risks, supporting a balanced approach to technological adoption.
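In software terms, a "kill switch" is a circuit breaker wrapped around the autonomous system: once tripped by an operator or a monitoring alert, every action is refused until a human explicitly resets it. This is a minimal sketch of that pattern, with hypothetical names.

```python
class KillSwitch:
    """Minimal circuit breaker: once tripped, the guarded system
    refuses to act until a human operator explicitly resets it."""
    def __init__(self):
        self.tripped = False
        self.reason = None

    def trip(self, reason):
        self.tripped = True
        self.reason = reason

    def reset(self):
        self.tripped = False
        self.reason = None

    def guard(self, action, *args):
        if self.tripped:
            raise RuntimeError(f"system halted: {self.reason}")
        return action(*args)

switch = KillSwitch()
switch.guard(print, "normal operation")        # runs while untripped
switch.trip("anomalous output rate detected")  # operator or monitor tripped it
# Any further guarded call now raises RuntimeError until reset().
```

The key design property is fail-closed behaviour: after tripping, the default is inaction, and restoring service requires a deliberate human step.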

Maintain an active inventory of AI systems currently deployed

  • A centralized inventory provides a clear map of every AI tool used within the public sector, preventing redundant or overlapping projects.
  • This database allows for efficient lifecycle management, from initial deployment to the eventual decommissioning of aging systems.
  • Tracking active systems is essential for conducting accurate socio-economic impact assessments and algorithmic audits.
  • It ensures that policymakers have a holistic view of how AI is integrated into the nation’s service-led growth.
  • An inventory facilitates transparency, allowing the public to see where and how their data is being utilized by the state.

Provide "regulatory agility" by reviewing policies every 6–12 months

  • Frequent policy reviews allow the government to keep pace with the rapid technological advancements expected through 2025.
  • Regulatory agility ensures that rules do not become obsolete as AI further establishes itself in creative and data processes.
  • It allows the government to respond quickly to new economic challenges, such as shifts in global trade or consumer spending patterns.
  • This adaptive approach helps India stay competitive by removing bureaucratic hurdles for emerging AI-native startups.
  • Agile governance is the key to balancing necessary safety guardrails with the goal of rapid technological innovation.

Empower whistleblower protections for AI safety concerns

  • Strengthening whistleblower protections encourages employees to report unethical practices or safety flaws without fear of retaliation.
  • This internal accountability mechanism is essential for identifying hidden biases or "black box" risks in proprietary AI models.
  • Protections ensure that ethical concerns regarding data privacy or non-consensual processing are addressed at the source.
  • By listening to internal experts, the government can improve the robustness and transparency of its AI governance framework.
  • It fosters a culture of responsibility within the tech sector, aligning private innovation with the public good.

Create public feedback loops for ongoing policy refinement

  • Public feedback loops allow citizens and businesses to share their real-world experiences with AI-driven government services.
  • This direct input helps policymakers identify "friction points" in the Digital Public Infrastructure and refine it for better inclusion.
  • Open dialogue builds the trust necessary for the "SHEconomy" and marginalized groups to participate fully in the digital age.
  • Feedback loops ensure that AI governance remains a democratic process, reflecting the values of the 1.4 billion people it serves.
  • They provide the empirical data needed to adapt to changing market conditions and consumer expectations.

Utilize AI for internal audit and government efficiency

  • Deploying AI for internal audits can significantly reduce administrative leakages and improve the efficiency of public sector operations.
  • AI tools can monitor government spending in real-time, ensuring that infrastructure investments are utilized as intended.
  • Automated audits help manage inflationary pressures by identifying and correcting wasteful practices within the bureaucracy.
  • Utilizing AI internally demonstrates the government's commitment to becoming a sophisticated and responsible tech user.
  • Efficiency gains from AI support the resilient growth trajectory needed for India to remain the fastest-growing major economy.

Publicly disclose AI governance performance metrics annually

  • Annual disclosure of performance metrics provides a transparent report card on how well the national AI mission is meeting its targets.
  • These metrics allow the public to evaluate the effectiveness of the AI Ethics Manifesto and safety protocols.
  • Transparency in governance performance builds investor confidence and supports sustained private investment in the tech sector.
  • It demonstrates a commitment to ethical marketing and the transparent use of data in a world driven by AI.
  • Annual reports ensure that the state remains vigilant and accountable for the long-term impacts of its digital strategy.

In summary, effective AI governance is not a static set of rules but a dynamic, multi-dimensional undertaking that requires persistent coordination between policymakers, industry stakeholders, and civil society. The transition from abstract ethical principles to concrete implementation—through risk-based regulation, robust infrastructure, and continuous capacity building—is the defining challenge for 21st-century leadership. By fostering a pro-innovation environment that simultaneously prioritizes transparency, accountability, and safety, nations can harness AI as a force multiplier for inclusive growth. The future belongs to those who view AI governance not as a barrier to development, but as a critical infrastructure for building public trust and resilience. Ultimately, a successful national strategy ensures that AI serves as a partner in empowering human capabilities, securing a smarter, safer, and more prosperous future for all citizens.

Building a robust national AI framework in 2026 requires moving beyond basic policy to creating an "AI-native" state. This involves treating AI infrastructure—compute, data, and talent—as foundational Digital Public Infrastructure (DPI), similar to roads or telecommunications.