
Artificial Intelligence: The Columbus of the Digital Age, Unveiling the World of Entertainment

Introduction

Recently, a stupendous achievement in the Korean-pop industry was the debut of an all-AI girl group named “Mave”, whose members were created using deepfake and 3D technology to appear hyper-realistic, and who released a full-fledged album led by the single “Pandora”. This is a self-explanatory example of how artificial intelligence has made its way into each and every arena of work, including the entertainment industry. The use of AI in this industry is manifold: it has left its imprint on music composition, film-making, e-sports, virtual reality and various social media trends. Computer-generated imagery has the potential to bring mythical creatures, unreal characteristics and futuristic beings into existence, as witnessed in films like Ra.One, Adipurush and Avengers: Infinity War. Furthermore, as the entertainment business makes its way through the murky seas of artificial intelligence, we observe that many artists are bending to the technology rather than the other way around. Parody videos featuring deepfakes of Tom Cruise doing things that could once only be imagined, now possible because of artificial intelligence, reportedly gave director Robert Zemeckis the idea for what has been marked as AI’s acting debut: the film “Here”, starring Tom Hanks at different stages of his life. The influence exerted by AI in the gaming industry can be seen in the massive craze generated by the announcement of the release date of the well-liked online game GTA VI. Many other virtual games have been developed with the aim of providing people with a real-time experience using 5D and 7D technology.


Everything Artificial About This Intelligence

While we take into consideration all the fruitful benefits reaped from AI, we cannot overlook the quandary it creates through the misuse of deepfakes and chatbots. Deepfakes are considered by many a subset of synthetic media generated using machine learning or AI. However, deepfakes can also become a warped form of cybercrime under the umbrella of AI, manipulating data and creating fake videos and audio to deceive the public using deep learning techniques. They can be used to harass and blackmail others, defame their reputation, infringe their privacy or fabricate something mendacious merely for popularity, as in the case where the face of Rashmika Mandanna, a famous Indian actor, was superimposed on the body of an influencer. They can also be used in sexually explicit content to degrade someone out of revenge, similar to what happened with Gal Gadot. Thus, it can be noted that AI, which is created by humans, can become the sworn enemy of humans themselves.


A recent development concerning the above-mentioned problems can be observed in the adoption of the European Parliament’s Artificial Intelligence Act. Under it, artificially generated content must be clearly identified so that viewers are informed of its manipulated nature. The disclosure requirement aims to preserve the integrity of information and protect against potential harm while allowing for the continued enjoyment and exploitation of creative works. Similarly, the European Union has also implemented the Digital Services Act, compelling social media platforms to comply with labelling requirements. This measure aims to augment transparency and assist users in verifying the authenticity of media content.


In India, a 73-year-old man encountered a deepfake scam when he received a call from an individual posing as his former colleague, requesting money. The scammer employed deepfake technology to generate a video call, seamlessly replicating the face and voice of the victim’s ex-colleague. Unfortunately, the victim transferred money before discerning the deception. The case clearly shows how the creator of a deepfake can manipulate media to encroach on one’s privacy, damage one’s reputation, disseminate misinformation or perpetrate a crime such as financial fraud. The draft Guidelines for Prevention and Regulation of Dark Patterns, 2023, issued under Section 18 of the Consumer Protection Act, 2019 by the Ministry of Consumer Affairs, Government of India, aim to safeguard consumers against “dark patterns”, which are designed to mislead or manipulate users into actions they did not originally intend, subverting consumer autonomy, decision-making or choice. A similar net of protection has been introduced by the guidelines issued for large language model (LLM) generative AI.


An LLM, in simple terms, is a form of artificial intelligence that can identify patterns in text and generate text of its own. The above-mentioned guidelines prohibit users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating or sharing unlawful content, as per Rule 3(1)(b) of the IT Rules, 2021. This acts as a tool to create vigilance among internet users that any meddling with unlawful information will be met with consequences.
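The “identify patterns and generate text” idea can be illustrated with a deliberately tiny sketch. This is not a real LLM — real models learn statistical weights over billions of tokens — but a toy bigram model over an invented one-line corpus, which shows the same principle of predicting the next word from patterns in prior text:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record, for each word, every word observed immediately after it."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length, seed=0):
    """Produce text by repeatedly sampling a word that was seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # no continuation was ever observed for this word
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model learns patterns and the model generates text from patterns"
model = train_bigram_model(corpus)
print(generate(model, "the", 5))
```

Every word the toy model emits is one it has already seen follow the previous word in its training text — a miniature version of why training data matters so much in the legal debates discussed below.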


The Human-AI Paradox

The use of generative artificial intelligence (AI) in the entertainment sector did not appear overnight; rather, it gradually emerged as a result of the many smaller adjustments brought about by AI’s applications across various industries. AI is used in the entertainment sector to boost productivity and flexibility, but a human creation without moral implications is like a lemon without its sourness. IPR infringement, stereotyping, illegal data use and deception are only a few of the obvious problems. Furthermore, the human-AI dilemma, which describes a scenario in which AI surpasses its human creators, is a prime illustration of how AI, supposed to be a refiner of creativity, is also preventing artists from carrying out their work. The aspirations of human artists are evident in the introduction of AI as a creative partner, since they argue that AI empowers human ideas. Emotions, values, expertise and experience can yield better results when combined with innovation. The 2023 Microsoft Work Trend Index makes this point clear and poses the crucial question, “Will AI fix work?” The majority of those polled for the index indicated that they hope to use AI to share their workload. Although this AI-artist alliance could encourage creativity in the entertainment business by personalising content to make it more immersive and engaging, there is concern that AI could replace the human touch and endanger artists’ creations. Another notable question being debated in this context is whether AI will replace human labour. A closer examination of this issue within the entertainment sector leads us to a conclusion comparable to the worries of the notable writer and film producer Jonathan Taplin.
His main worry, expressed in his book, is that generative AI will eventually take over the labour of human authors, artists, photographers and other creative professionals in the entertainment and arts sectors. Earlier perspectives saw AI as an automator of the mundane, but the uproar it has caused in the entertainment sector has resulted in strikes by employees demanding job security against AI taking over their positions. During their 148-day strike, members of the Writers Guild of America (WGA) met often to prevent the industry’s workers from being overwhelmed by the march of artificial intelligence.


The Intellectual Quandary of AI and Ownership

As a marvel of human intelligence, generative AI derives its powers from the human mind. One theoretical interpretation of how AI functions is that it learns from previous content: by identifying patterns in pre-existing material, AI uses that knowledge to create new, seemingly original content. Since there is always pertinent data available for AI to train itself on, this method of data generation is almost failsafe. There has never been any certainty regarding the ownership and patentability of AI-generated content. It has always been pondered whether AI should be given authorship of work it autonomously creates. The misuse and misappropriation of intellectual property, often known as intellectual property infringement, is an additional threat. Currently, no country provides such rights. Taking the example of India, Section 2 of the Patents Act, 1970 clearly refers to the patentee as a person. It also defines “inventive step”, a prerequisite for an invention to be patentable, such that the invention must not be obvious to a “person” skilled in the art. The stance is the same in other countries such as the UK and Ireland. Recently, Stephen Thaler applied to patent inventions created by the AI DABUS, but the applications were rejected by the respective jurisdictions (here). A plain-English prompt such as ‘paint a picture like Monet’ will give us results that might seem like original, authentic work; however, it is more or less an imitation of what is already available on the internet. Artists are not given the opportunity to opt out, to consent or to be compensated for the use of their work as a database for AI-engineered tools. Generative AI produces works that are a kind of transformed synthesis of human-created work. This process itself sounds illegal if read along the lines of copyright law.
However, if AI were restricted by such rules and regulations, its sources would be limited to the public domain, which would hinder the innovation and creativity AI now offers us. Obtaining permission also appears an irrational alternative, because it is improbable that permission could be obtained from the billions upon billions of sources; yet it is sometimes asserted that licensing would provide a just market for creative works. In the context of a patent application under the UK Patents Act 1977, a crucial legal question emerged in Thaler v. Comptroller-General of Patents, Designs and Trade Marks concerning the eligibility of an AI program to be designated as the inventor. The issues in the case included whether the 1977 Act permits the grant of a patent without a named human inventor, and whether, for an invention made by an AI machine, the owner, creator and user of that machine is entitled to the grant of a patent for the invention. The court concluded that AI could not be considered an inventor for the purposes of patent law. The challenge inherent in granting AI the capacity to obtain patent rights is intertwined with issues of liability. Endowing AI with rights poses a potential obstacle to consumer protection, as there may be no clearly identifiable party accountable for addressing any subsequent issues. Granting patent rights to entities with opaque decision-making raises concerns about transparency. This unpredictability raises questions about the real nature of inventive steps and whether they align with the intended scope of patent protection. AI, lacking ethical consciousness, may produce unintended consequences or applications that are ethically questionable.
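The “transformed synthesis” point can be made concrete with a small sketch. Assuming a toy generator (invented here purely for illustration) that stitches its output together from overlapping three-word fragments of a training corpus, every fragment of the “new” work is traceable to the source material — which is exactly the copyright tension described above:

```python
import random

def generate_from_trigrams(corpus_words, length, seed=0):
    """Build 'new' text by chaining overlapping three-word fragments of the corpus."""
    rng = random.Random(seed)
    trigrams = list(zip(corpus_words, corpus_words[1:], corpus_words[2:]))
    out = list(rng.choice(trigrams))  # start from a random fragment
    while len(out) < length:
        # extend only with fragments whose first two words match the current tail
        candidates = [t for t in trigrams if t[0] == out[-2] and t[1] == out[-1]]
        if not candidates:
            break
        out.append(rng.choice(candidates)[2])
    return out

corpus = ("the painter layered soft colour over water lilies and "
          "the painter layered light over the garden pond").split()
new_work = generate_from_trigrams(corpus, 8)
print(" ".join(new_work))
```

By construction, every consecutive trigram of the output already exists in the corpus: the result may read as novel, yet it is a recombination of someone else’s words — a simplified stand-in for the debate over whether generative outputs are original works or derivative ones.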


Modern Legal Regime: Sponsored By AI

The world is at a stage of exploration with AI where, every second, something new, unique or peculiar is discovered. The increasing use of artificial intelligence demonstrates how ready humans are to adopt and make use of these cutting-edge technologies. While AI and machine learning are revolutionising the entertainment sector, it is crucial to remember that these technologies depend heavily on personal data to function. AI algorithms thus jeopardise artists’ privacy and their creative output. Furthermore, worries about where that data is stored and who can access it are of equal significance. Because of this, it is critical that authorities acknowledge the fundamental issues with generative AI and provide a clear legal framework to address them. All forms of intellectual property rights form the foundation of the new knowledge-based economy, which has been a key engine of growth over the last 20 years. Here, we confine our discussion of intellectual property rights to the direct application of copyright and patent laws, since they help us address both the liability and the privacy issues. Judge Beryl A. Howell of the US District Court reiterated in the recent Thaler v. Perlmutter case that a work created solely by artificial intelligence without human involvement is not protected by copyright. It is evident that this jurisdiction firmly supports the conventional methodology. Granting intellectual property rights to certain computer programs looks unreasonable in such a situation because artificial intelligence lacks any solid legal foundation. Similar concepts have been covered above with respect to patent ownership and rights. Amid the ongoing dilemma of whether AI should be granted patent rights, an Australian court affirmed AI as an inventor at first instance (a decision later overturned on appeal).
While adopting a wider interpretation of the term “inventor” in line with innovation and advancement, the court reasoned that the very objective of the Patents Act, 1990 is to let inventors reap the benefits of their inventions, and that refusing to recognise AI as an inventor would defeat this purpose. Thus, the Court favoured Dr Thaler’s contention that AI could be named an inventor, with the resulting patent entitlement flowing to him. Before dwelling on the question of AI as an inventor, we should tackle the extent of AI’s potential for “invention”. Recently, the convergence of artistic and scientific thinking gave rise to Philyra, an AI product-composition system designed to assimilate knowledge of formulas and raw materials to master the art of creating a fragrance. Thus, it can be concluded that AI is better as a tool for invention than as an inventor itself. AI still requires human intervention to discern the task at hand and identify the approach by which it should be carried out. AI today is ‘automatic’ but not ‘autonomous’.


In a judgement passed by the Delhi High Court in Microsoft Technology Licensing LLC v. Asst. Controller of Patents, it was held that if a computer-based invention leads to a technical effect or contribution, it may still be patentable. There is, however, no clarity on the concepts of ‘technical effect’ and ‘contribution’.


In India, the author of a work is granted moral rights over it under Section 57(1)(b) of the Copyright Act, 1957; however, there are no known instances of this being applied to the internet. While the legal system in India may seem retrograde, there are remedies such as the John Doe injunction that have been steadily expanded to cover online distribution of material and unlawful copying. In a similar vein, compensation may be granted under Section 43A of the IT Act, 2000 where negligence in handling sensitive personal data results in a data-privacy violation. Artificial intelligence is not explicitly mentioned in any of these provisions, which leads to the conclusion that there is no special legislation governing the regulation of AI, even though certain existing rules can be applied to such situations. According to a study conducted by India’s Ministry of Electronics and Information Technology, artificial intelligence raises several ethical, legal and cyber-security concerns. In the catalogue of verdicts handed down around the world, two fundamental themes emerge: responsibility and privacy. This further suggests that it is past time for pertinent laws to be created to solve the difficulties that have already been identified.


Attributing Responsibility: Unraveling the Culprit in AI Misuse

In everyday speech, we always link problems to the people who cause them, but who is at fault here? AI businesses? End users? Customers? OpenAI? Or perhaps AI itself? As there is no one person or body accountable, there may never be a definitive answer to this question. The European Parliament has stated that “any wrongful acts or omissions by an AI will make its human operator/ manufacturer/ user liable and not the AI itself.” Analysing this statement: if AI is not to be held liable for any misdeed, it would not be fair and just to give it the title of ownership when its acts prove to be a boon.



According to the AI Index Report: Measuring Trends in Artificial Intelligence, the number of cases concerning the ethical misuse of AI is on the rise. The increase is a visible indicator of both the greater use of AI and growing consciousness of the possibilities for misuse.


The inappropriate use of AI applications can result in unwarranted surveillance, involving the monitoring of individuals' activities without their awareness or consent, thus prompting substantial privacy apprehensions. Improper deployment of AI algorithms may contribute to an overabundance of user data profiling, generating intricate profiles without sufficient consent and consequently jeopardizing user privacy. Misapplication of AI technologies might unintentionally unveil sensitive details, including personal preferences, habits, or health-related information, posing a threat to individual privacy.
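The consent problem described above can be sketched in a few lines of code. The platform, field names and consent flags below are entirely hypothetical, invented for illustration: the point is simply that a profiling routine can be gated on explicit opt-in rather than built silently from user data:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    consents: set = field(default_factory=set)  # purposes the user opted into

def build_profile(user, activity_log):
    """Aggregate viewing activity into a profile, but only with explicit consent."""
    if "profiling" not in user.consents:
        raise PermissionError(f"{user.name} has not consented to profiling")
    profile = {}
    for event in activity_log:
        profile[event] = profile.get(event, 0) + 1
    return profile

alice = User("Alice", consents={"profiling"})
bob = User("Bob")  # never opted in: profiling him raises PermissionError

print(build_profile(alice, ["watched_drama", "watched_drama", "liked_song"]))
```

The design choice is that refusal is the default: absent an affirmative consent record, the profiling step fails loudly instead of proceeding — the inverse of the “overabundance of profiling without sufficient consent” the paragraph above warns against.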


When AI is misused, a crucial question emerges: who bears responsibility for the consequences? The primary principle dictates that the individual or entity misusing AI should be held accountable for any resulting harm. This underscores the notion that those who exploit AI for malicious purposes or engage in actions leading to negative outcomes are legally liable for the damages incurred. The foundational concept is rooted in the idea that accountability aligns with actions, emphasizing the need to attribute responsibility to the party directly responsible for the misuse of AI technology.


Embracing Tomorrow: Inviting Evolution and Growth

Nevertheless, one thing that must be ensured is honest dialogue among all of these parties and stakeholders; even potential problems that are unlikely to arise should be taken into consideration. AI must be seen through a human perspective, not just as a means of advancing technology. The rapid advancement of generative AI and its intersection with the entertainment industry raises ethical, privacy and security issues; however, it is crucial to realise that while these concerns are valid, they should not deter us from actively seeking and implementing remedies. These obstacles, if dealt with head-on, will help us ensure a balance between technological progress and the safeguarding of human interests.


To tackle the issue of deepfakes, a multi-dimensional approach is needed, encompassing legal, social, educational and technological facets. The ideal model should rest upon the following foundational principles: identification of deepfakes, mitigation of deepfake occurrences, establishment of a robust grievance and reporting mechanism, and propagation of public awareness.
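These principles could be wired together as a simple moderation pipeline. The sketch below is purely illustrative — the field names, the disclosure flag, the detector score and the review queue are all hypothetical, not any platform’s real API — but it shows how identification, mitigation and a grievance mechanism (with visible labels serving public awareness) might interact:

```python
def handle_upload(media, grievance_queue):
    """Route one media item through identification, mitigation and reporting steps."""
    # 1. Identification: did the uploader disclose that the content is synthetic?
    is_synthetic = media.get("ai_generated", False)

    # 2. Mitigation: label disclosed synthetic content instead of hosting it silently,
    #    as the EU AI Act's transparency rules envisage.
    if is_synthetic:
        media["display_label"] = "AI-generated content"

    # 3. Grievance and reporting: undisclosed content that a detector flags with
    #    high confidence is queued for human review rather than published.
    if not is_synthetic and media.get("detector_score", 0.0) > 0.9:
        grievance_queue.append(media["id"])
        media["status"] = "under_review"
    else:
        media["status"] = "published"
    return media

queue = []
labelled = handle_upload({"id": "clip42", "ai_generated": True}, queue)
flagged = handle_upload({"id": "clip99", "detector_score": 0.97}, queue)
```

Disclosed synthetic content is published with a warning label, while suspected undisclosed deepfakes are held back for review — a minimal encoding of the identify-mitigate-report triad.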


The most important hurdle is to ensure that the opaqueness created by AI does not violate the principles of equality, open justice and the Rule of Law. The consonance of AI with fundamental rights, data protection and values grounded in sustainable development is crucial in order to maintain complete fairness, transparency and accountability. What is lacking is a proper plan of action, because it seems that every country is hurriedly adopting AI measures simply to join the race for power among nations. Liberty should never turn into licence, so there should be limitations on the use of AI in the entertainment sector. The values on which the entire concept of law stands are good faith, equity and natural justice, which will be hard to maintain with the use of AI because there is no precision as to how the legal framework would actually operate. Control-board policies should also be completely rewritten to offer appropriate solutions, encourage moral behaviour and promote the responsible use of artificial intelligence. Recently, the European Union reached an agreement on the AI Act, which includes various facets such as safeguards on general-purpose AI, limitations on biometric identification systems, the right of consumers to lodge complaints, and fines for non-compliance. In June 2020, the Global Partnership on Artificial Intelligence (GPAI) got underway. This global project is accessible to all nations. A move like this will undoubtedly promote uniformity and benefit nations with low levels of AI awareness and inadequate legislation to address the technology. The four main areas on which the working groups have chosen to concentrate are innovation and commercialisation, the future of work, data governance, and responsible artificial intelligence. Such programmes also lay the groundwork for the effective incorporation of generative artificial intelligence in the entertainment sector.
Since the main goal of GPAI is to promote the development and application of artificial intelligence in a way that is “human-centric”, it is argued that artists working in the creative sector will, as a result, encounter innovation in a safe and responsible way. As is clear from the discussion thus far, Stephen Thaler has pursued numerous cases in various courts over his AI-assisted inventions, and his attempts to have AI recognised by patent laws as a legitimate inventor have met a range of responses. AI can digest data quickly and access thousands of databases in a couple of seconds; these capabilities demonstrate how difficult it would be for human artists to accomplish such tasks alone. As a result, the application of AI in the entertainment sector has the potential to be both a blessing and a curse. An invention created by DABUS, an artificial intelligence, was granted a patent in South Africa. This turned out to be a huge win for Thaler because, as previously mentioned, he had been turned down for patents by the US, the UK and the EU. The reality, however, is that South Africa lacks a strong framework for substantively examining patents, which runs counter to the notion that this ruling validates AI’s ownership outright. There has been little legal development of intellectual property law in South Africa on this point; in fact, the ruling underscores how inadequate South Africa’s IPR rules are in comparison with those of other, more industrialised nations. Additionally, the UK found that, as DABUS is not a person, it is not eligible to be the inventor under Section 7(2) of the UK Patents Act 1977, which provides that a patent for an invention should be granted primarily to the inventor or joint inventors. It is necessary to have a policy along these lines.
In order to guarantee that human inventors maintain primary ownership rights, the goal should be to create a transparent and moral framework for ownership of intellectual property created by AI or a computer program in collaboration with human artists in the entertainment sector. Contractual agreements can clearly define ownership rights in cooperative ventures while taking into account each party’s contributions, and they ought to state that the principal ownership rights go to the human inventors. Such a strategy will ensure that human inventors retain proper ownership of, and credit for, their contributions to the entertainment business while also acknowledging the role of artificial intelligence in the creative process.


With the Global Partnership on Artificial Intelligence summit scheduled to take place in New Delhi, and the Ministry of Electronics and Information Technology (MeitY), Government of India, assuming the role of Lead Council Chair, it is critical that we understand how India views AI technology and its role. With a dedicated portal, IndiaAI, to address all questions regarding AI, India prioritises problem-solving over economic growth, as also stated in the DPIIT’s 2018 AI task force report. AI has also changed the Indian entertainment industry, as will be seen with the impending release of Monica: An AI Story, a Malayalam film billed as India’s first artificial-intelligence film. It is crucial to know whether Indian laws are ready for these kinds of future changes. NITI Aayog’s national strategy for artificial intelligence implies that the Indian government aims to strike a balance between the greater good and the financial impact of AI; however, it blatantly disregards the entertainment industry. Furthermore, because NITI Aayog is merely a think tank and lacks the authority to draft legislation, it is up to the legislature to fill in the gaps left by proposed frameworks and transform them into urgently needed legislation. It is hoped that the GPAI summit will produce fundamental rules and guidelines to support the inclusive and social growth of technology in the entertainment sector, in India and worldwide.

 

This article has been authored by Suhana and Aneya, students at the Hidayatullah National Law University, Raipur. This blog is a part of RSRR’s Blog Series on 'Traversing the Intersectionality of the Entertainment Industry and Generative Artificial Intelligence', in collaboration with The Dialogue.
