Celebrities depend on their reputation and personality to earn their livelihood. It is their fame that enables celebrities to influence people, especially the younger generations. It is therefore imperative for celebrities to ensure that their influence and repute are shielded from misuse and commercial exploitation. A recent incident involving the creation and transmission of a digitally manipulated video, known as a “deepfake”, to defame celebrities has reignited discourse on the dangers of Artificial Intelligence (“AI”), which remain inadequately regulated. In the video in question, the face of popular actor Rashmika Mandanna was morphed onto a video originally created by a British social-media personality named Zara Patel, violating the actor’s privacy and damaging her public image. The use of her likeness and images without her consent infringes upon her agency and her right to control her own identity.
Introducing Deepfakes
‘Deepfake’ is a portmanteau of the words ‘deep learning’ and ‘fake’. The term was coined in 2017, when a Reddit user operating under the name ‘deepfakes’ posted hundreds of pornographic videos featuring the morphed faces of celebrities. Deepfakes use advanced AI and deep learning techniques to generate content that appears real: AI-synthesised media that uses facial recognition and deep learning to convincingly swap the likeness of one person with another in audio, video or images. The category also includes audio deepfakes, artificially generated audio that mimics another person’s voice to create convincing speech. Deepfakes have both negative and positive implications, depending on their intended usage. The recent example of explicit deepfakes of Taylor Swift flooding social media highlights the potential negative ramifications of such AI-generated media.
The potential for damage caused by such manipulative AI techniques can be better understood through a practical example. If an Instagram Reel created using AI and deep learning portrays Amitabh Bachchan promoting a height-increasing product, his followers may be deceived into spending their hard-earned money on it, potentially damaging the celebrity’s reputation. Recently, the mayors of various European cities were duped into holding video calls with a deepfake version of the Mayor of Kyiv. Therefore, the question arises: how can public or political figures protect themselves and their fans from exploitation?
The influence of political figures can also be misused by circulating deepfakes of well-known politicians on social media platforms. Such videos can negatively affect voter behaviour and can also be used to incite violence and persecute certain communities. For instance, a video showing a sitting minister of the Telangana government appealing to voters to vote against the incumbent government was circulated widely on WhatsApp. It later turned out to be a deepfake meant to influence voter choices.
Considering that deepfakes recognise no geographical barriers, it is pertinent to question whether present international laws and regulations are sufficient to curb this menace. The question is also whether countries are prepared for the damage caused overseas by such AI-created content. While celebrity rights are well developed in some jurisdictions, the discourse and jurisprudence supporting the creation of such a regime in India are still at a nascent stage.[1]
An Indian Perspective on Personality Rights
The concept of personality rights stems from two inherent rights: the right to privacy and the right of publicity.[2] The right to privacy, when available to celebrities, protects them from unauthorised usage of their likeness. The right of publicity protects individuals from having their images, likenesses or voices used for commercial purposes without their consent. Such unauthorised usage is analogous to passing-off, i.e., the actionable exploitation of a well-known trademark without consent. While various jurisdictions have codified personality rights, the relevant Indian laws are scattered within the legislative framework for intellectual property rights (“IPR”). The Copyright Act protects the creations of authors, but it does not specifically protect the likeness of a public figure. Copyright law fails to address the needs of celebrities with respect to deepfakes, as celebrities often do not own the copyright in the clips used to create the deepfake. In the USA, popular figure Kim Kardashian recently used copyright claims to remove a satirical deepfake created from her videos; however, since celebrities mostly do not hold the copyright in the content they have featured in, this remains an incomplete solution. The Trade Marks Act, 1999 can be used to protect a celebrity’s likeness, but it contains no express mention of celebrities and their rights. The Delhi High Court, however, has on a number of occasions championed personality rights by re-affirming a celebrity’s right to protect their likeness from being exploited without consent.
Titan Industries v. M/S Ramkumar Jewellers dealt with the protection of personality rights.[3] Amitabh and Jaya Bachchan endorsed the plaintiff company’s brand Tanishq, and the defendant used, without consent, photographs created for the plaintiff’s marketing campaigns. Since the IPR in the campaign vested in the plaintiff, the defendant was sued for misappropriation of personality rights, passing-off and copyright infringement. The Court held that the personality rights of the celebrities in the instant case were being misappropriated without their consent. Similar precedents protecting the public image of celebrities have been set in cases filed by popular figures such as Sourav Ganguly and Daler Mehndi. Hence, the remedies for unauthorised usage of celebrities’ likenesses mirror those for IPR infringement.
This conundrum becomes even more intricate when analysing whether the courts are sufficiently equipped to protect personality rights in light of AI-generated deepfakes. The Anil Kapoor case can serve as a starting point for much-needed judicial clarity regarding unauthorised or unlicensed AI-generated content. Anil Kapoor sought legal action after noticing several websites hosting AI-generated content using his voice, likeness and images.[4] The Delhi High Court categorically held that unauthorised usage of a celebrity’s likeness misleads consumers and violates the celebrity’s privacy and publicity rights, which prohibit such commercial exploitation. The judgment can prove a considerable judicial victory for famous personalities against the unfettered commercial exploitation of AI. Such judgments can also aid politicians in protecting their likenesses from commercial use; however, they fail to protect them from reputational damage caused by deepfakes spreading false information. Further, deepfakes of celebrities can be used to commit crimes such as extortion and fraud, and celebrities should be protected from such usage of their likeness.
Analysing the Existing Framework
From the previous discussion, it becomes clear that, beyond the IPR regime, there exists no specific body of law that affords complete protection to a celebrity’s rights against the creation of deepfakes and other forms of AI-based manipulation. The Consumer Protection Act, 2019 may cover deepfakes, since it contains provisions penalising misleading advertisements. In conjunction with the Information Technology Act, 2000, specifically Sections 66A and 66D, which target the publishing of false information and cheating by personation using computer resources respectively, there may exist some protection against deepfakes being used in misleading advertisements (though it bears noting that Section 66A was struck down as unconstitutional in Shreya Singhal v. Union of India, limiting this route). However, certain other aspects of deepfakes remain untouched by law. For example, the individuals behind deepfakes that damage the public image of influential personalities are usually unknown, leaving courts unable to impose monetary penalties. There is also no clear regulation of how social media platforms must deal with deepfakes, especially in terms of removal.
The IT Rules, 2021, read with the newly enacted criminal code, the Bharatiya Nyaya Sanhita (“BNS”), which replaced the Indian Penal Code, can further aid the analysis of deepfake regulation. Section 334(4) of the BNS penalises forgery involving electronic records and can be interpreted to include deepfakes within its ambit. The Rashmika Mandanna incident prompted the Indian government to issue a directive asking social media platforms to remove any such AI-generated content within 24 hours of receiving a complaint. The directive is a step towards the recognition and removal of non-consensual AI-generated content. Further, social media platforms like Meta have attempted to curb manipulated AI-generated videos. However, despite Meta’s prohibition on artificially altered videos that use public figures’ likenesses to deceive users, both Instagram and Facebook are flooded with content that bypasses these algorithmic restrictions, owing to technological inadequacies in Meta’s moderation framework, a problem also observed on the Google-owned video-sharing platform YouTube.
International Approaches to Dealing with the Deepfake Crisis
The deepfake conundrum also creates challenges within the international legal framework. At this juncture, it becomes important to examine jurisdictions such as the USA and the EU and their regulatory approaches to deepfakes. The EU’s Artificial Intelligence Act, instead of outrightly banning deepfakes, imposes obligations on creators to disclose which content is artificially generated. Despite its clear intent on paper, enforceability remains a key issue, especially for deepfakes generated outside the EU.
In the USA, similarly, there is no single regulation prohibiting or regulating deepfakes; rather, different states have approached the matter differently. For instance, Texas has banned pornographic deepfakes, and California has banned deepfakes that may affect the outcome of elections. Neither the EU nor the US framework sufficiently addresses the challenges posed by deepfakes. In India, too, immediate regulation to control the unfettered dissemination of deepfakes is needed.
Threats of Deepfakes and the Role of a Regulatory Body
The international impact of a deepfake featuring a popular political figure is illustrated by the video that surfaced online of President Volodymyr Zelenskyy requesting Ukrainian soldiers to abandon the fight against Russia. Conspicuous differences in voice and accent led the world to recognise that the video was AI-generated. It marked the first use of deepfakes to spread disinformation at the international level and weaponise public opinion.
Considering that deepfakes can have a cross-border impact and hamper international peace, the role that the United Nations can play in alleviating AI threats becomes imperative. The UN, in a recent report, has flagged AI-generated media content as a serious threat, and the UAE minister has called for a global coalition to tackle the challenges created by AI. AI-generated content has the potential to become an international security concern and should be regulated at the international level, with a focus on the platforms, such as social media, where such content is disseminated. An international regulation or a UN resolution could help a trusted information environment thrive, which is the need of the hour. A possible solution lies in regulating deepfakes through cooperation and international conventions, so as to uniformise rules for countering malicious deepfakes across borders. Similar to the Budapest Convention on cybercrime, deepfakes could be regulated internationally through a uniform penal framework. However, such a framework would face jurisdictional issues and would require intensive policy research to ensure its effectiveness.
A regulatory body within India responsible for aiding social media platforms in identifying deepfakes, responding to complaints and ensuring media literacy among the masses would go a long way in curbing the unregulated spread of malicious deepfakes. Social media platforms can contribute by strengthening their systems for identifying videos that pose security concerns, especially those involving celebrities or political figures. Videos that might be AI-generated and have the potential to create conflict or sway public opinion should be carefully analysed.
The Indian government recently sent an advisory asserting that the responsibility to identify and remove deepfake videos lies with social media platforms. The Ministry of Electronics and Information Technology is also reportedly considering amending the IT (Intermediary Guidelines) Rules to define deepfakes and to make it mandatory for intermediaries not to host such content.
Concluding Remarks
The challenge of regulating deepfakes is extremely multifaceted, ranging from the difficulty of identifying and punishing the infringing individuals to inadequate moderation, on social-media platforms, of artificially manipulated videos of public figures intended for deceptive purposes. Exponential advancements in AI continue to make it difficult to discern real from fake. As argued above, international cooperation modelled on instruments such as the Budapest Convention, together with a domestic regulatory body that aids platforms in identifying deepfakes, responding to complaints and ensuring media literacy, offers the most promising way forward, with platforms themselves carefully analysing potentially AI-generated videos that could create conflict or sway public opinion.
Social media platforms, national governments and international organisations can come together to prevent AI-generated content from becoming a security threat in the digital era. Discussing the rights of popular figures at the international level would help create legal mechanisms for protection against AI-generated content. AI manipulation specifically exploits popular figures to shift narratives in today’s information-driven society. Social media platforms can play a significant role by flagging content featuring celebrities that is uploaded from unrecognised accounts and might damage the celebrity’s reputation or mislead consumers and the general public. Once content is found to be a deepfake, it should be removed, and viewers should be informed of its inauthenticity. Thus, acknowledging the power that such popular figures hold and recognising their rights on social media platforms can aid in creating a just information society.
[1] Garima Budhiraja, Publicity Rights of Celebrities – An Analysis under the Intellectual Property Regime, 7 NALSAR Student L. Rev. 86 (2011).
[2] Bisman Kaur & Gunjan Chauhan, Privacy and Publicity – Two Facets of Personality Rights, Remfry & Sagar, Brands in the Boardroom (2011) <https://www.remfry.com/wp-content/uploads/2017/11/privacy-and-publicity-the-two-facets-of-personality-rights.pdf> accessed 28 April 2024.
[3] 2012 SCC OnLine Del 2382, paras 15, 19.
[4] 2023 SCC OnLine Del 6914, paras 39-42.
This article has been authored by Namah Bose, a fourth-year student at Rajiv Gandhi National University of Law, Punjab. This blog is a part of RSRR’s Blog Series on 'Traversing the Intersectionality of the Entertainment Industry and Generative Artificial Intelligence', in collaboration with The Dialogue.