India’s AI Governance Guidelines: A Blueprint for Responsible AI Development

AI specialist reviews data charts, working in tech industry

As part of the Scribere program offered by TechReg Bridge, Scribere Diya and Scribere Samyukta under the guidance of Anupam Sanghi, analyzed the recently released Report on AI Governance Guidelines (Report) released by the Ministry of Electronics and Information Technology (MeitY) on January 6th, 2025. 

The Report comes at a time where several cases have cropped up highlighting the growing concerns around Artificial Intelligence’s (AI) influence. News agencies like ANI have filed lawsuits against OpenAI, alleging unauthorized use of their content for AI training. Similar cases have been filed by Indian publishing houses, including Rupa Publications and Cambridge University Press, to protect intellectual property rights. 

Interestingly (or alarmingly), AI is emerging as a threat to the personality rights of individuals with actor Anil Kapoor and singer Arijit Singh seeking protection against AI-generated impersonations.

In this bulletin, we understand the need for AI regulations, the course charted for the Indian AI and data privacy experience and whether such a course is really the viable solution.

The Need for AI Regulation 

AI in Critical Sectors

AI is transforming India’s education, legal, and social media sectors. In education, AI-driven tools like robots, VR lessons, and gamified activities boost accessibility, engagement, and personalization. AI also aids students with disabilities through speech-to-text apps and AI-based screen readers. In law, AI enhances efficiency in contract review, legal research, and document automation, with firms like Cyril Amarchand Mangaldas using tools like Harvey and ChatGPT. However, automation raises concerns about job prospects for fresh graduates. We tackled this exact dilemma in January’s TB Quest, focusing on the implications of AI for GenZ, especially in terms of employability and salary conversations. AI personalizes content, moderates harmful material, and improves user safety, with platforms like Instagram, Twitter, and Facebook leveraging it for a better digital experience.

AI and Misinformation

AI seems to have accelerated disinformation by enabling the creation and rapid spread of fake texts, images, audio, and videos. While AI-driven algorithms enhance user engagement, they may also amplify false content, raising ethical concerns about democracy, human rights, and peace. Disinformation erodes trust, distorts decision-making, and fuels conflicts. Additionally, AI-powered recommendation systems on platforms like Facebook, Twitter, and YouTube personalize content, shaping individual realities and influencing public perception. As AI both generates and disseminates disinformation, it poses serious challenges to societal values and individual rights.

AI’s Role in the Indian Economy 

AI is reshaping India’s economy, driving industry growth and innovation. With the wave of regulatory action in the digital markets, global tech giants like Google and Microsoft are watching policies that may impact their strategies. The IndiaAI Mission aims to democratize computing, enhance data quality, develop local expertise, and promote ethical AI. Other initiatives, like the PM-STIAC AI mission, focus on AI-driven solutions in healthcare, education, agriculture, and smart cities. Businesses, including Infosys and Tata Technologies, are leveraging AI for efficiency and security, while Tata Communications integrates AI into its cloud services. However, AI adoption must address data privacy, ethics, and workforce displacement.

Legal Precedents and AI-Related Cases

ANI has sued OpenAI for allegedly using its content without permission to train ChatGPT, arguing unfair competition and copyright infringement. OpenAI claims its use of publicly available data falls under fair use and has challenged the Delhi High Court’s jurisdiction. Meanwhile, the Federation of Indian Publishers and media outlets like NDTV and Network18 have raised similar concerns about content scraping.

In separate cases, Anil Kapoor and Arijit Singh have successfully defended their personality rights against unauthorized AI-generated use of their identity, voice, and likeness. These rulings underscore the growing need to safeguard personality rights in the digital age. 

Ethical and Privacy Concerns

As businesses adopt AI to boost productivity, privacy and compliance challenges remain critical. Data, although often collected with consent, can be misused beyond its intended purpose. Malicious actors exploit AI vulnerabilities through techniques like prompt injection attacks, tricking models into revealing sensitive information. The rise of deepfake technology further escalates risks, enabling hyper-realistic media that fuels misinformation, phishing scams, and identity theft. These threats highlight the urgent need for robust AI security measures.

Foundation of AI Governance in India

Recognizing Artificial Intelligence’s (AI) transformative power and potential risks, India has embarked on a mission to establish a comprehensive AI governance structure. With the launch of the IndiaAI Mission on March 7, 2024, the Indian government took a significant step toward building a structured AI ecosystem. A multi-stakeholder Advisory Group has been constituted under the chairmanship of the Principal Scientific Advisor (PSA) of India to undertake development of an ‘AI for India-Specific Regulatory Framework.’

MeitY further set up a subcommittee on AI Governance and Guidelines Development, which happens to be the body releasing this Report. As its first exercise, the subcommittee advocates for a principle-based approach, broadly laying out the ‘AI Governance Principles’ that it seeks to use as the foundation for upcoming regulation. These included the usual suspects – Transparency & Accountability, Safety, Reliability & Robustness, Fairness & Non-Discrimination, Human-Centered Values, and Privacy & Security. 

Key Approaches to AI Governance

Keeping the principles in mind, the subcommittee suggested three key approaches to operationalising AI governance – 

  • Lifecycle Approach:  AI systems evolve through multiple stages – development, deployment, and monitoring – and risks vary across these stages. A lifecycle approach ensures that regulations account for risks at every stage of this cycle.
  • Ecosystem-driven Governance: Instead of regulating AI in isolation, India seeks to adopt an ecosystem-wide approach that considers all stakeholders involved in AI’s development and use. These stakeholders include data principals, providers, AI developers, deployers, and end users. The aim seems to be to distribute accountability and ensure that AI deployment is responsible and well-regulated, by acknowledging the roles and responsibilities of each stakeholder.
  • Techno-legal Strategy: The subcommittee has proposed a flexible ‘techno-legal’ approach to replace the usual, rigid ‘command and control’ approach. What is hoped for is regulatory compliance through technology and tech-enabled legal enforcement by combining things like watermark and deepfake detection with algorithmic audits and monitoring. 

Recently, in the 37th LAWASIA Conference, Anupam Sanghi presented a White Paper that proposed such a techno-legal framework in the broader context of digital markets and digital competition. The paper argues that current laws governing digital spaces are rigid and reactive and pushes for a hybrid techno-legal framework that blends ex-ante and ex-post measures while highlighting behavioural dynamics as the core assessing factor. 

Gap Analysis

While India has existing legal frameworks that indirectly govern AI, several gaps remain. The absence of a unified AI-specific regulatory framework leaves room for inconsistencies in compliance and enforcement. Additionally, current laws do not fully account for emerging AI risks such as automated decision-making biases, liability in case of AI failures, and cross-sectoral accountability. Addressing these issues requires a coordinated and structured approach that combines ethical considerations, transparency measures, and sectoral regulations.

Deepfakes, Fake, Malicious Content 

AI-generated synthetic media, including deepfakes, presents significant risks in terms of misinformation, identity fraud, and societal harm. The subcommittee recognized that while existing laws provide mechanisms to address cybercrimes and they may even be adequate in some regard in prosecuting malicious synthetic media, they do not comprehensively tackle the unique challenges posed by AI-driven content manipulation. There is a need for clearer legal mandates to ensure AI-generated content can be traced, verified, and regulated effectively. This includes mechanisms like watermarking AI-generated media, robust content labeling, and liability frameworks for AI developers and deployers.

For a deeper understanding into the what, why, and how of deepfakes, you can check out our blog article here.

Cybersecurity Risks

While the IT Act and the DPDP Act mandate responsible data handling and processing, they do not fully address AI-specific cybersecurity risks, such as adversarial AI attacks or unauthorized model access. Even the recently released DPDP Rules fail to address some key gaps with respect to automated profiling and re-identification of personal data powered by AI and ML. The subcommittee observed that integrating AI-focused cybersecurity guidelines within the existing framework would help create a more comprehensive regulatory approach.

Intellectual Property Rights

The ambiguity surrounding AI’s use of copyrighted content raises concerns over fair use, data licensing, and ownership rights. Current copyright laws do not explicitly cover AI-generated works, making enforcement difficult for content creators and AI developers alike. The Report highlights the importance of defining liability for AI model developers, dataset curators, and deployers to ensure accountability in cases of copyright infringement.

AI-enabled Bias and Discrimination 

AI models trained on biased datasets may reinforce and amplify societal prejudices in crucial areas such as employment, lending, healthcare, and law enforcement. Although existing anti-discrimination laws provide broad protections, they do not explicitly outline responsibilities for AI developers in mitigating algorithmic bias. The lack of standard methodologies for assessing fairness in AI models exacerbates these challenges. Establishing frameworks for bias detection, fairness audits, and accountability in AI.

Fragmented Regulatory Approach

The governance of AI in India is currently divided among multiple regulatory bodies, each overseeing AI applications within their respective sectors. This fragmented approach leads to inconsistent compliance requirements and regulatory inefficiencies. A unified regulatory framework is necessary to streamline AI policies across industries, ensuring a consistent and comprehensive governance structure.

Actionable Recommendations

  • Establish an AI Coordination Committee

The first step is a ‘whole-of-government’ approach through the establishment of an Inter-Ministerial AI Coordination Committee, which would unify AI governance efforts across various regulatory bodies. This committee would play a crucial role in developing a cohesive AI regulatory roadmap, fostering collaboration between government departments and industry stakeholders, and monitoring AI risks to ensure compliance with governance principles.

  • Establish a Technical Secretariat

The Report emphasizes the need for a Technical Secretariat under MeitY. This Secretariat would be tasked with mapping India’s AI ecosystem, conducting risk assessments, and developing standard evaluation metrics. 

  • Creation of AI Incident Database 

Another crucial recommendation is the creation of an AI Incident Database, a centralized reporting system that would track AI-related harms and failures – with the OECD AI Incidents Monitor as reference. This repository would enhance transparency, enabling stakeholders to identify patterns of AI-related malfunctions, biases, and risks. The insights gained from this database would help inform policy adjustments and refine future AI governance strategies.

  • Encourage Voluntary Industry Commitments 

The subcommittee also recommends that the government work with private AI developers to encourage voluntary industry commitments for AI self-regulation. These commitments would include transparency reports, model cards, and internal and external risk assessments. AI developers and deployers would be expected to uphold ethical AI standards by implementing compliance measures that mitigate risks and biases.

  • Adoption of Techno-legal Solutions

In addition to these measures, the Report advocates for the adoption of techno-legal solutions to address AI risks. A digital-by-design governance approach should be embraced, leveraging technologies such as watermarking AI-generated content, privacy-enhancing techniques, and automated compliance tools to strengthen regulatory frameworks.

  • Incorporation of AI-specific provisions into the Digital India Act 

Finally, the subcommittee highlights the necessity of incorporating AI-specific provisions into the Digital India Act (DIA). The DIA should include clear legal guidelines on AI-generated content, strengthened dispute resolution mechanisms, and a regulatory framework for mitigating AI-driven cyber threats.

Final Thoughts: A Strong Start but More is Needed

The Report is definitely a good start for AI regulation in India. There are quite a few upsides; the ‘whole-of-government’ approach is exactly the need of the hour, addressing the plague of fragmented regulation. The creation of a Technical Secretariat and AI incident database are also in line with global standards. Most importantly, the Report recognizes the current environment of AI proliferation in India. It understands that the definition of AI is too rigid and that a more dynamic techno-legal approach is required. 

However, there have been obvious misses – the lack of focus on the use of personal data for AI model training and the viability of voluntary, self-regulatory measures. 

For now, we will have to play the waiting game for more nuanced regulatory recommendations.

Share the Post: