Meta’s Privacy Policy Change: Comparative Legal Perspectives from EU and India
- Stuti Smruti Mishra
Introduction
Artificial Intelligence (hereinafter "AI") is seeping into our daily lives through its seamless integration into social media. This global phenomenon has raised data privacy concerns in relation to AI training. The line "If you're not paying for the product, you are the product", popularised by the documentary The Social Dilemma, perfectly captures the essence of the rising Meta-driven concerns about data privacy.
Meta has run into regulatory trouble in the European Union (hereinafter "the EU"). At the heart of the issue is its updated privacy policy, which was scheduled to take effect worldwide on June 26, 2024. The policy was updated to accommodate the new AI features that Meta introduced across its social media platforms. Under the updated privacy policy, Meta can use its users' public data, including their personal information, pictures, posts, and comments, to train its AI.
In the United States of America, these features have been in place since September 2023. In the EU, organizations such as "None of Your Business" and "Test Achats" filed several lawsuits, resulting in widespread apprehension. Following these disputes, the Irish data privacy regulator directed Meta to delay its plans in the EU, resulting in an indefinite postponement. In India, however, Meta rolled out the AI features as planned.
Meta defended the update, stating that the new policy aligns with local privacy laws. It further asserted that training AI on publicly accessible personal data is a common approach industrywide and not a practice unique to Meta. It is therefore unsurprising that OpenAI and Google also face lawsuits in the EU over their data collection practices. This raises concerns not just about Meta's policy change, but about industry practice itself.
This article analyses the above-mentioned issue, limited to the jurisdictions of the EU and India, in the backdrop of the General Data Protection Regulation (hereinafter "GDPR") and the Digital Personal Data Protection Act (hereinafter "DPDPA") along with the recently notified Draft Digital Personal Data Protection Rules, 2025 (hereinafter "Draft Rules"), respectively.
EU-GDPR
If the purpose of processing the data collected by the data controller (hereinafter "controller") changes, the original consent is no longer specific or informed enough. However, apart from "consent", the GDPR lays out another ground for lawfully processing data in Article 6(1)(f): "legitimate interest". It is a standalone ground that allows controllers to process data collected from data subjects without their consent. In the face of widespread criticism, Meta maintained that processing data to train Meta's AI constitutes a legitimate interest under Article 6(1)(f).
When data is not obtained from the data subjects directly, they must be informed as per Article 14. The change to Meta's privacy policy was therefore notified across the EU. Meta must also furnish supplementary information to ensure fair, equitable, and transparent processing of users' data. One such disclosure, under Article 14(2)(b), obliges Meta, as the controller, to disclose the legitimate interests pursued by it or by a third party.
This does not mean merely communicating that the controller will process data on the ground of legitimate interest. Simply declaring certain processing to be a legitimate interest does not automatically entitle the controller to rely on it. The controller must be specific about why the processing is necessary and communicate that reasoning to the data subjects concerned.
Before Meta's plans were halted, users were provided the option to opt out of AI training. However, users had to cite reasons and justify their objection, which drew major disapproval. In the EU specifically, even a cursory reading of the provisions shows that when a data subject exercises the right to object, the controller bears the burden of demonstrating sufficient legitimate grounds for the processing. It is the controller's responsibility to show that the interests, rights, and freedoms of the data subjects do not override its own. Asking data subjects to justify why they want to opt out therefore blatantly reverses the intention of the GDPR.
Controllers can use a Legitimate Interest Assessment to demonstrate sufficient grounds for processing under legitimate interest, which also makes it easier to meet their obligations once a data subject invokes the right to object. The assessment consists of three tests that help the controller justify the impact of its legitimate interest on individuals' rights: the purpose test, which identifies the legitimate interest; the necessity test, which shows that the processing is required; and the balancing test, in which the controller weighs its interests against the interests, rights, and freedoms of the data subject. Whether Meta will communicate any such demonstration in compliance with Article 14(2)(b) in the face of the objections raised by data subjects remains to be seen.
India- DPDPA
Legitimate interest under the GDPR has a much broader scope than the concept of 'certain legitimate uses' under the DPDPA. Exemption from the need for consent under 'certain legitimate uses' can be invoked only in the circumstances mentioned in Section 7. These include processing personal data voluntarily shared by the data principal for specified purposes where the data principal does not object; processing to comply with law or judicial orders; processing for employment purposes; and responding to specified emergencies such as disasters, epidemics, or medical emergencies. Thus, unlike legitimate interest, certain legitimate uses are exhaustive and inflexible. In the absence of a provision equivalent to legitimate interest, the DPDPA does not provide the equivalent protections either, such as the right to object.
Meta's updated privacy policy does not violate any provision of the DPDPA because, unlike the GDPR, the DPDPA does not protect publicly available personal data. Section 3(c)(ii) excludes personal data that the data principal, or a person under a legal obligation, has made publicly available.
The language of the exemption is also concerning, specifically "causing personal data to be made publicly available". It not only means that processing publicly available personal data does not require the data principal's consent, but also suggests that the data principal need not even be aware that their personal data is publicly available. A data principal is therefore left with no recourse apart from the option to object offered by Meta, which does not apply to retrospective processing.
The grievance redressal mechanism in India for such data privacy concerns is inadequate. As per Section 13, data fiduciaries must provide individuals with an accessible grievance redressal mechanism to resolve issues related to the fiduciaries' obligations or the enforcement of data principals' rights. The Data Protection Board's assistance can be sought only if a data principal's grievance remains unresolved. The Draft Rules accentuate this issue by requiring entities to prescribe their own timelines for responding to grievances, without providing any upper limit for the grievance redressal timeline. Therefore, where a data principal discovers that personal data they never made public is publicly available and is being used for a purpose they did not consent to, approaching the data fiduciary might also yield no result.
India, as the most populous country in the world, unarguably generates a large amount of data from digital activity on various platforms. As of April 2024, there were over 954.40 million internet subscribers, owing to the development of internet connectivity in rural regions. With greater access to the internet, the people of India also gained greater access to social media platforms like Instagram and Facebook. While it is not surprising that India has emerged as the largest market for Meta AI, the lack of a stronger regulatory framework for data protection is all the more concerning.
Given the vastness of the data created and utilised, India fails to provide sufficient legal protection. Since the DPDPA does not protect publicly available data, personal or otherwise, India risks becoming the ultimate 'gold mine' of freely available data for AI companies seeking to train their models. And not only AI companies: the absence of any protection for publicly available data means it could be used by any individual or entity anywhere in the world, for any purpose, without the knowledge of the data principals and without consequence under the DPDPA, further eroding the data privacy of Indian citizens.
Conclusion
The contrasting regulatory approaches of the GDPR and the DPDPA show the complexity of data privacy. Under legitimate interest, except where the processing is for direct marketing, controllers have the option not to accept an objection raised by a data subject. That does not, however, mean that the user must justify their objection; rather, Meta must demonstrate that it has a legitimate interest.
Justifying the use of personal data without the data subjects' consent on the ground of legitimate interest demands rigorous compliance. Conducting a Legitimate Interest Assessment helps produce a well-reasoned demonstration. In the current landscape, a Legitimate Interest Assessment needs to be viewed as a crucial tool for upholding accountability rather than a mere procedural formality, even when not required by law.
This privacy policy issue unfolds very differently in India than in the EU because publicly available data is excluded from the DPDPA's ambit. In India, Meta is not legally obligated to provide an opt-out option or to entertain any objections it receives. Furthermore, the grievance redressal mechanism under the DPDPA is riddled with lacunae.
Ideally, as under the GDPR, the DPDPA should extend protection to all kinds of data without excluding publicly available data from its ambit. One cited challenge of bringing publicly available personal data within the DPDPA, however, is the increased complexity of lawfully processing data, which gives higher discretion to data fiduciaries at the cost of data principals. A crucial step forward would therefore be to ensure that data principals are informed whenever any of their personal data is publicly available.
A possible solution for making data principals aware of the usage and processing of their personal and publicly available data could be data traceability. It could empower data principals to know the whereabouts of their data and how it is processed. At the very least, this would give them the knowledge of how their data is used and let them decide whether to exercise their rights under the DPDPA, such as the right to erasure of data under Section 12.
The current data protection regime also fails to provide meaningful protection to personal data. The mere public availability of personal data, even without the data principal's knowledge, excludes it from the DPDPA and renders the requirement of consent redundant. The regime therefore needs clarification on what it means to cause personal data to be made publicly available, accompanied by the circumstances under which such data would not be afforded protection under the DPDPA.
Furthermore, a reasonable statutory timeline for grievance redressal by data fiduciaries needs to be provided. Since neither the Draft Rules nor the DPDPA does so, a data fiduciary could notify any amount of time it deems fit for grievance redressal. The absence of a reasonable timeline will reduce grievance redressal under the DPDPA to a mere formality. In its current form, the DPDPA explicitly recognises the rights of individuals to protect their data. On the other hand, the statutory standard of compliance for the lawful processing of publicly available data remains a challenge, and the lack of an adequate grievance redressal mechanism compounds the issue. It is therefore not far-fetched to conclude that the Indian data privacy regime is currently limited to meaningful recognition, not significant protection, of individuals' rights.
This article has been authored by Stuti Smruti Mishra, a student at the National Law University, Odisha. This blog is part of the RSRR's Rolling Blog Series.