Yashaswi Belani & Jash Botadra

Legal Paradigm Shift in Corporate Governance with the Introduction of Artificial Intelligence

Introduction 

Before we understand the functioning of AI in corporate boardrooms, it is imperative to discuss the role of the Board of Directors (“BOD”) in a corporation. The BOD is responsible for ensuring corporate governance by defining a company’s vision and mission, monitoring the appointment of senior-level executives and evaluating their performance. Its members act as fiduciaries of the stakeholders and shareholders. Data-driven decision-making is the primary role played by AI in corporations: AI analyzes the data and produces outputs relating to quarterly performance, strengths, weaknesses and specific executives that may need close attention. However, producing these outputs requires human-machine collaboration, which is bound to give rise to new dimensions of liability that ought to be explored and analyzed.


Analysis

Vicarious Liability

The doctrine of vicarious liability applies where an employer is held liable for the wrongful acts of their employee. It is based on the Latin maxim “qui facit per alium facit per se”, which translates to “He who does an act through another is deemed to do it himself”. The essentials of the doctrine are –

1. There must be a relationship between the two parties – the employer and the employee.

2. The wrongful act must be related to the relationship of the parties.

3. The wrongful act must be done in the course of employment.


In order to hold the machine vicariously liable, AI has to be given the status of an employee, for which it is necessary to assign it the identity of a legal person. Once AI is incorporated into the decision-making process of the business, a relationship between the AI and the corporation is established. Further, the wrongful decisions taken by the AI would affect the business and thereby relate to the employer. Since the AI would be dedicated solely to the corporation’s work and to making analytical decisions, the machine would be in the course of employment at all times.


Therefore, all the basic ingredients are fulfilled. However, the requirements of neither of the two sub-heads under vicarious liability, i.e. contract of service and contract for service, are fully met.


Under a contract of service, the employee is subject to the control of the employer. In the case of AI, however, the machine cannot be completely controlled by its operators in its decision-making because of its black-box algorithm, which refers to the inability to fully trace an AI’s decision-making process or to anticipate its outcomes.


On the other hand, a contract for service cannot be established because the functioning of AI is not entirely independent; it is contingent on the input data fed to it by the corporation.


Test of Control

It is one of the traditional tests used to determine the extent of vicarious liability. Here, the employer not only delegates the work to the employee but also directs the methodology to be adopted. However, the only phase where there is any scope for direction as to methodology is the designing of the software, which cannot be considered an instruction given directly to the AI with respect to the task allotted to it. This is not control, because there is no direction from the BOD; rather, it would be routed through a software engineer. Therefore, the right and ability to control the AI, which is an essential ingredient of the doctrine, will be conspicuously missing from the present equation.


It may be argued that a modern approach was adopted in Dharangadhra Chemical Works Ltd. v. State of Saurashtra,[i] wherein it was held that the element of control is subjective to the nature of the business. Subsequently, tests such as the employment test and the ‘integral part of the business’ test have been formulated. However, all such tests primarily determine whether the service is a contract of service or a contract for service, which, as discussed above, cannot be determined in the case of AI. This calls for a redefinition of the ingredients of the doctrine.


Product Liability

If AI is treated as a product, the doctrine of product liability can be attracted. Under this doctrine, the sellers and agencies, as well as the manufacturers involved in making the product available in the market, are held accountable and liable for injuries caused by the product.


The difficulty with applying this doctrine is that different institutions and organizations are responsible for developing the different parts of an AI that together synthesize the intelligence[ii]: for example, software engineers, hardware engineers, data scientists and programmers. If the anomaly arises from the input data, the software or the hardware, a suit against the corporation, the software engineers or the manufacturers, respectively, would be attracted. Since it will be impossible to determine which part of the AI caused the wrongful act, again because of its black-box algorithm, the only viable way would be to allow distributive liability, under which every person, institute or organization involved in making the machine, from manufacturing to software design to hardware production, would be liable. However, for one wrongful act, several institutions would then have to compensate for the loss, which will intensify legal complications.


In light of the aforementioned reasoning, the authors believe it is safe to say that the new technology will amplify the pace and precision of corporate governance manyfold. Nevertheless, the laws in place are not concrete enough to assign liabilities or confer rights concerning AI.


Suggestions

A few suggestions are listed below:

1. Allowing Personhood To AI

  1. If any crime is committed by an AI, then to determine its civil and criminal liability, it is essential to accord the AI the status of a legal person. To accord legal personhood, an entity needs to be vested with various rights and obligations.[iii] The essentials of a juristic person are –

1. It is an entity capable of legally holding rights and duties.

2. It lacks the ability to think.

3. It functions with the help of natural persons.

In India, by way of various landmark judgments, corporations,[iv] deities[v] and the environment[vi] have been given the status of juristic persons. The rights vested in corporations and idols are exercised by the BOD and the managing boards/trusts, respectively. The same rationale can be applied to AI, wherein the corporations using the AI, or its users, can act as its trustees/agents.

  2. Excluding the deep-learning variant, where the AI can learn and make decisions on its own, assistive data-driven AI fulfills all the essentials of a juristic person. It does not have its own power of thought and is controlled either by its developers or by its users. Therefore, assistive data-driven AI should be accorded the status of a juristic person under law.

  3. Saudi Arabia serves as a precedent, as it became the first country to grant citizenship to a robot named Sophia, who is to be accorded the same legal and social rights as any other citizen.

2. Two-Tier System

  1. This suggestion can be implemented by all kinds of corporations. Companies can adopt a two-tier board system wherein the lower tier, i.e. the management tier, is taken over by the AI. Within this tier, all decision-making processes will be carried out by the AI, and its decisions will then be transferred to the upper tier, the Supervisory and Approval tier.

  2. The Supervisory and Approval tier will be responsible for checking and approving the decisions transferred from the management tier. These decisions will be implemented only after due diligence has been carried out by the human-navigated upper tier.

  3. This system will also be accommodating in cases of criminal liability, since one of the essentials of a criminal act is mens rea, and it is not feasible to identify the mens rea of a machine. Therefore, through the doctrine of the corporate veil, mens rea can be identified by following the two-tier board system.

3. Criminal Liability Of AI

There are three essentials of criminal liability:

1. Actus Reus

2. Mens Rea

3. Strict Liability Offences (where it is not required to prove mens rea)


Gabriel Hallevy’s models of criminal liability for AI in the Indian context

Three ways are suggested in which AI can be accorded criminal liability, taking the example of the crime of murder:


Principle of Perpetration-via-Another

In this model, the crime is committed by the developers through the AI, which acts only as an innocent agent lacking the mental capacity to determine the consequences of its actions. Therefore, the person instructing the AI should be held criminally liable for murder as per Section 300 of the Indian Penal Code (“IPC”), as that person fulfills the elements of actus reus and mens rea by knowingly committing a criminal act causing death.


Natural-Probable-Consequence

According to this model, a criminal act is committed by the AI because it was wrongly triggered during its natural course of working. Even though the element of mens rea is missing in this scenario, actus reus would still be attracted. Therefore, as per Section 299, IPC, the AI’s developers and users can be held liable for culpable homicide for unintentionally causing death through a criminal act.


Direct Liability

This refers to attributing both actus reus and mens rea to the AI itself; proving actus reus is comparatively easier than proving mens rea. It is therefore suggested that AI be accorded criminal liability only for strict liability offences, where mens rea need not be proved. For example, if an accident is caused by a self-driving car because of over-speeding, the criminal law applicable to humans in such a case should be made applicable to the AI.


Conclusion

AI stands at the forefront of today’s developments. It is an advanced technology that would prove highly beneficial to corporations at the level of corporate governance, increasing precision and reducing the time consumed in this crucial process. However, there is a substantial legal vacuum under the prevailing laws.


For the competent and effective incorporation of AI at the crucial level of corporate governance, it is imperative that the existing laws evolve organically and come on par with these advanced technologies.


The authors believe that these solutions can be adopted by the legislature and by every corporation with the aid of the country’s think-tank, NITI Aayog, whose objectives revolve around fostering cooperative federalism.

 

[i] Dharangadhra Chemical Works Ltd. v. State of Saurashtra & Ors., AIR 1957 SC 264.

[ii] A.S. Mittal & Anr. v. State of U.P. & Ors., (1989) 3 SCC 223.

[iii] Shiromani Gurdwara Parbandhak Committee v. Som Nath Dass and Others, (2000) 4 SCC 146.

[iv] Tata Engineering & Locomotive Company Ltd. v. State of Bihar, (1964) 6 SCR 885.

[v] Yogendra Nath Naskar v. Commissioner of Income Tax, (1969) 1 SCC 555.

[vi] Mohd. Salim v. State of Uttarakhand, 2017 (2) RCR (CIVIL) 636.


This article has been authored by Yashaswi Belani and Jash Botadra, students at Symbiosis Law School, Pune. This article is part of RSRR’s Corporate Governance Blog Series.
