As part of the Scribere program offered by TechReg Bridge, Scribere Samyukta, under the mentorship of Anupam Sanghi, prepared a Quick Guide to the Digital Personal Data Protection Rules, 2025. Encompassing everything from the procedural history of operationalizing data privacy as a fundamental right to a rundown of the Consent Manager Framework, the Guide provides a comprehensive overview of the key provisions of the Draft Rules, identifies regulatory gaps, and explores potential fixes.
While the Rules are a forward-looking attempt to safeguard personal data, they do not fully address emerging yet critical challenges posed by new technologies in the evolving digital landscape. In this bulletin, we explore these gaps – particularly related to Artificial Intelligence (AI) and Machine Learning (ML) – and how they can be bridged, with a view to drawing lessons from the European Union’s General Data Protection Regulation (GDPR), which has had a significant impact on data protection frameworks globally.
At the outset, these Draft Rules remind us that governance has become increasingly important in de-risking digital platforms from regulatory grey areas. In digital markets, a subject closely connected to digital data protection, a techno-legal framework has to be examined from a multi-stakeholder point of view.
Anupam Sanghi presented a White Paper at the 37th LAWASIA Conference, Kuala Lumpur, analysing such gaps in India’s tech regulation. The paper proposes a hybrid Techno-Legal Framework that combines legal and technological tools, amongst other measures, to ensure fair and effective governance.
As part of this TB Quest, we tackle these pertinent questions –
- Are the DPDP Rules keeping up with AI-powered profiling, or are we flying blind?
- Is your “anonymous” data really anonymous, or a re-identification disaster waiting to happen?
- When AI makes unfair decisions, who’s responsible – and who pays the price?
The Rise of AI and ML in Data Processing
AI and ML are increasingly being used to process personal data, enabling organizations to predict user behaviour, personalize services, and automate decision-making. While these technologies provide immense value, they also raise privacy concerns, particularly regarding automated profiling and the potential to re-identify anonymized data. An egregious example of such privacy violations is the surge in deepfakes and other synthetic media, powered by increased automation, the ready availability of user data, and sophisticated machine learning capabilities.
Automated profiling refers to the use of AI and ML technologies to assess personal data and predict individual behaviours or characteristics, while re-identification involves the re-association of anonymized data with specific individuals, often through advanced machine learning techniques.
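To make the first of these concrete, here is a minimal, hypothetical sketch of automated profiling: a simple classifier trained on personal attributes to predict a behavioural outcome. The features, data, and the ad-click prediction task are invented for illustration; real profiling systems are far more elaborate.

```python
# Hypothetical sketch of automated profiling with a simple ML classifier.
# All features, values, and the prediction task are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Personal attributes per user: [age, hours_online_per_day, past_purchases]
X_train = [
    [25, 6, 12],
    [42, 2, 3],
    [31, 8, 20],
    [55, 1, 1],
]
y_train = [1, 0, 1, 0]  # 1 = clicked a targeted ad, 0 = did not

model = LogisticRegression().fit(X_train, y_train)

# The fiduciary now "profiles" a new user: a prediction about their
# behaviour is derived automatically, with no human in the loop.
new_user = [[29, 7, 15]]
print(model.predict(new_user))        # predicted behaviour
print(model.predict_proba(new_user))  # confidence attached to the profile
```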
What is Missing in the DPDP Rules Regarding Automated Profiling and Re-identification?
- Limited Focus on Profiling in the Context of AI and ML
The DPDP Act and Rules focus extensively on data collection, storage, and consent, but place relatively little emphasis on regulating automated decision-making processes. The DPDP Act does define ‘automated’ as any digital process capable of operating automatically in response to instructions given, and ‘processing’ to include wholly or partly automated operations. However, the Act is vague about the specific rights attached to automated profiling.
Section 9(3) of the DPDP Act prohibits targeted advertising based on the behavioural monitoring of children. There is little comparable respite for adult data subjects, and even less in the context of AI and ML. The Rules do not clearly specify whether individuals have the right to object to automated profiling or how such profiling should be made transparent to users. As a result, individuals may be subjected to decisions based on algorithms that they have no visibility into or control over.
- Re-identification of Anonymized Data
While the DPDP Act and Rules recognize the importance of anonymization as a privacy-preserving measure, they do not address the issue of re-identification in sufficient detail. The DPDP Act does not specifically mention or exclude anonymized data from its scope. Yet, if it can be demonstrated that the data does not identify an individual – either by itself or when combined with other information – it is likely to be exempt from the provisions of the DPDP Act. However, as AI and ML algorithms continue to evolve, so does the ability to de-anonymize data. With cross-sector data-sharing capabilities becoming highly sophisticated, it is a significant omission that the DPDP Rules fail to explicitly outline how re-identification risks should be handled. A simple linkage attack, sketched below, shows how easily ‘anonymized’ data can be re-identified.
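The following is a hedged illustration of the classic linkage-attack pattern: joining an ‘anonymized’ dataset with a public auxiliary dataset on shared quasi-identifiers. All names, attributes, and values are fabricated for this example.

```python
# Hypothetical linkage attack: re-identifying "anonymized" records by
# joining on quasi-identifiers. All data is fabricated for illustration.
import pandas as pd

# "Anonymized" dataset: direct identifiers removed, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "zip": ["110001", "110001", "560034"],
    "birth_year": [1990, 1985, 1990],
    "gender": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Public auxiliary dataset (e.g., a voter roll) sharing those attributes.
public = pd.DataFrame({
    "name": ["A. Sharma", "R. Gupta"],
    "zip": ["110001", "560034"],
    "birth_year": [1990, 1990],
    "gender": ["F", "F"],
})

# Joining on quasi-identifiers re-attaches identities to sensitive data.
reidentified = public.merge(anonymized, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```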
- Insufficient Regulatory Framework for Accountability
The DPDP Rules largely place the onus of compliance on data fiduciaries, but they do not fully define accountability in terms of how AI and ML systems must be audited or evaluated for potential biases. For instance, if an AI-driven system makes discriminatory decisions based on personal data, there is little guidance on how to ensure transparency, rectify biases, or guarantee fairness in these automated processes.
Bridging the Gap: Recommendations
To address these emerging concerns, there are several ways the DPDP Rules could evolve, borrowing from existing frameworks and adopting new practices.
- Explicit Regulation of Automated Profiling and Decision-making
The DPDP Rules could include provisions similar to Article 22 of the GDPR, which gives individuals the explicit right not to be subject to automated decision-making, including profiling, if these decisions produce legal or similarly significant effects on them. The wording is broad enough to avoid imposing a massive compliance burden while still allowing a variety of scenarios to be addressed. Including such provisions in the Indian framework would provide individuals with stronger protections against automated decision-making, including mechanisms for individuals to contest or appeal such decisions.
- Stronger Protections Against Re-identification
The Rules should define anonymization more robustly and introduce mandatory requirements for ensuring that anonymized data cannot be re-identified. This could include obligations for Data Fiduciaries to assess the risks of re-identification, particularly when they use AI to correlate anonymized data with other datasets. Furthermore, the DPDP Rules could require companies to implement strict access controls and encryption protocols to prevent unauthorized re-identification efforts. One concrete form such a risk assessment could take is sketched below.
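As a sketch of what such a risk assessment might look like in practice, the following checks k-anonymity over quasi-identifiers, flagging records that fall into small equivalence classes and are therefore easiest to re-identify. The column names and the threshold k = 5 are illustrative assumptions, not requirements drawn from the Rules.

```python
# Hedged sketch: flagging re-identification risk via a k-anonymity check.
# Quasi-identifier columns and the threshold k = 5 are illustrative only.
import pandas as pd

def reidentification_risk(df, quasi_ids, k=5):
    """Return records whose quasi-identifier combination is shared by
    fewer than k individuals; these are the easiest to re-identify."""
    class_sizes = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return df[class_sizes < k]

# Fabricated dataset: a unique (zip, birth_year) combination is high-risk.
data = pd.DataFrame({
    "zip": ["110001"] * 6 + ["560034"],
    "birth_year": [1990] * 6 + [1975],
    "diagnosis": ["flu"] * 6 + ["asthma"],
})
print(reidentification_risk(data, ["zip", "birth_year"], k=5))
# Only the single (560034, 1975) record is flagged for mitigation.
```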
- Transparency and Accountability in AI and ML Systems
The DPDP Rules could mandate that organizations conducting automated profiling or deploying AI and/or ML technologies for decision-making be subject to regular impact assessments. These assessments can also promote fairness by identifying and eliminating discriminatory biases within algorithms. This requirement could be folded into the Data Protection Impact Assessments (DPIAs) required by Rule 12, albeit extended to all Data Fiduciaries and not just significant ones.
However, this requirement walks a fine line between accountability and compliance burden. At the CCAOI conference, we discussed how the current DPIA framework imposes a massive compliance burden on MSMEs due to its vague requirements and its failure to grasp the reality of the compliance environment in the country. Thus, by including AI-specific impact assessments, the DPDP Rules could ensure greater transparency and accountability in how AI systems are deployed, but only if the language is made sufficiently clear and free of ambiguity. One simple check such an assessment might include is sketched below.
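As an illustration only, here is a minimal sketch of a demographic-parity check, one basic fairness metric an AI-specific impact assessment might compute. The metric choice, the data, the group labels, and the 0.1 tolerance are all assumptions made for this example, not anything prescribed by the Rules or the GDPR.

```python
# Hypothetical bias audit: demographic parity difference across groups.
# Data, group labels, and the 0.1 threshold are illustrative assumptions.

def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest rates of favourable
    outcomes (1) across groups; 0.0 means perfectly equal rates."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Loan approvals (1 = approved) for applicants in two groups.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a legal standard
    print("Potential disparate impact: review the model before deployment.")
```

A real DPIA would of course go further, examining proxies for protected attributes, error-rate disparities, and documentation of mitigation steps; this sketch only shows that such checks can be made concrete and auditable.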
Conclusion
The DPDP Rules, 2025, represent a crucial step toward safeguarding personal data in India. However, as AI and ML technologies continue to evolve and permeate every sector, these rules may become outdated before they are even enforced. Failing to address key factors that influence the market not only blindsides the relevant sectoral regulator but also risks gaps in oversight elsewhere. For example, Big Tech entities have access to vast amounts of data, enabling them to entrench themselves in the market. Failing to address the implications of this data hoarding beyond the bare bones of privacy would mean missing out on key insights about market structure and concentration.
By drawing lessons from the GDPR and implementing more comprehensive yet practically flexible provisions around transparency, accountability, and individuals’ rights, India can ensure that its digital privacy landscape remains robust and forward-looking. As these technologies continue to reshape industries, both regulators and organizations must stay vigilant, balancing innovation with the need to protect fundamental rights in an increasingly digital world.

In furtherance of our commitment to sustainable and harmonious tech regulation and policy, Scribere Samyukta and Anupam Sanghi attended the CCAOI Manthan: A Stakeholder discussion on the Draft Digital Personal Data Protection Rules, 2025. Several pertinent issues were discussed, such as the practicality of the verifiable consent mechanism, the need for materiality thresholds, the ambiguity of language surrounding cross-border data transfers and SDF classification, and the disenfranchisement of disabled persons. For a complete roundup of the discussion, you can access the summary report here.