Shriya Singh & Aditi Anup

Resolving the Liability Dilemma in AI-Caused Harms

Introduction

The death of a Japanese worker at the hands of a robot in 1981 was a one-of-a-kind robotics accident that prompted fears that humans would eventually lose control of their mechanized offspring. With the potential to act unpredictably, unexplainably, and autonomously, artificial intelligence systems are increasingly engaged in activities that were unforeseen or unintended by their original programmers, making the liability question a raging concern. The dilemma as to whether these systems can be held accountable for the tortious acts they commit, often referred to as the “Who Pays?” question, also persists. The most obvious solution lies in granting a “legal personhood” status to these AI-enabled systems, an idea that was also deliberated upon by the EU Parliament in 2017. While the idea has gained much traction in recent times, it shall not be explored here. This piece shall, instead, primarily focus on the “vicarious liability” (within strict liability) model, arguing that it is not feasible for fixing liability for harms caused by autonomous technologies. A negligence-based model should instead be adopted to bridge the responsibility gap arising from the application of the vicarious liability regime.


Attributing Vicarious Liability in Case of AI-Caused Harm – A Valid Approach?

As technology continues to advance in various fields, the legal system assumes significance in the functioning and regulation of autonomous machines that increasingly pervade human life. Consider a scenario where a passer-by gets injured by a computer-operated, unmanned crane that drops a steel frame on them after incorrectly identifying the drop-off location. The question arises whether the computer agent that has replaced the human worker, or the human principal who coded that computer agent, should be held liable for the injury. Furthermore, is it valid to hold the human principal liable for harms caused by “computer tortfeasors”, and if not, what approach should be taken to impute liability for such harms? On this note, we shall explore the attribution of liability for AI-caused harms within the legal schema.


Currently, machines cannot be sued directly for the harm they inflict; thus, the injury is traced back to some human principal, that is, the person behind coding or implementing the tortious machine. As also reflected in the Second Law of Robotics, some human can be held responsible for injury inflicted by AI-based algorithms, thereby demonstrating the application of the vicarious liability principle (the “respondeat superior” doctrine) within tort law. The said doctrine holds the employer responsible for the autonomous acts of his employees performed within the course of employment, even though the former neither immediately influenced the tortious act nor participated in it directly. Similarly, in cases where an intelligent artifact is delegated the responsibility of decision-making, the likely rule is the imposition of vicarious liability. However, due to the complexity of the programs and the multiplicity of stakeholders behind them, there is often no obvious human who can be held accountable for algorithmic injuries, as seen in the example of the unmanned crane.


For holding the principal liable for the tortious acts committed by intelligent artifacts under the traditional vicarious liability model, the AI system would have to be construed as a “legal person” and a “tortfeasor”, and two elemental criteria would have to be fulfilled: i) the existence of a relationship between the principal and the AI tortfeasor such that one may be held liable for the fault of the other, and ii) a connection between that relationship and the tortfeasor’s wrongdoing. Liability under the aforesaid doctrine arises in employment relationships, that is, where the agent commits a tort in the course of employment, and in certain situations where an independent contractor commits a tort while acting in the discharge of the principal’s “non-delegable duty” to a third person to take care. This method of attributing liability is, however, fraught with difficulties. The possibility of holding the principal responsible for the wrongful acts of his artificial agent (“AA”) seems obscure. Moreover, the difficulty of construing the AA as an employee or an agent working in the employment of its human principal persists, for there are no established employment rules pertaining to machines. Another practical difficulty is identifying the principal where multiple stakeholders are involved. For example, in cases of medical malpractice caused by machines, the dilemma persists as to whether the physician who relied on the AI or the hospital should be held liable.


Considering the existing insufficiencies of this liability model, the EU Parliament report suggested expanding the notion of vicarious liability to situations where autonomous technologies are used in place of human counterparts. This would be based on the principle of “functional equivalence”: when harm is caused by an autonomous technology used in a way functionally equivalent to the employment of a human auxiliary, the “operator’s liability for making use of the technology should correspond to the existing vicarious liability regime of a principal for its own auxiliaries”. The issue that arises in this case, however, is that while vicarious liability is modeled around human behavior, the role of a technological auxiliary cannot be assessed according to human behavioral standards. R Abbott resolves this issue most convincingly by concluding that such automated machines should be assessed in the same manner as the human auxiliaries they replace, whereas in cases where the autonomous technology outperforms the human auxiliary, the harm should be assessed against the functioning of comparable technology available in the market.


Arguments in favor of vicarious liability under the strict liability regime, however, fall short in coping with the advancement of technology. The reason is that if computers may come to behave as a reasonable person would in the field of tort law, the question arises whether the strict liability rules should be amended. Lastly, what should be the standards according to which the liability of artificial agents is assessed? These questions are examined below.


Defying Vicarious Liability Due to Lack of Agency

As already discussed, the issue with vicarious liability (with respect to AI-caused harms) is that it seeks to hold the principal responsible for the acts of the agent. According to the Restatement (Third) of Agency, “Agency is the fiduciary relationship that arises when one person (a ‘principal’) manifests assent to another person (an ‘agent’) that the agent shall act on the principal’s behalf and subject to the principal’s control, and the agent manifests assent or otherwise consents so to act.” This definition clearly implies that both parties should be “persons” capable of making decisions so as to form a legally binding relationship between them. Applying this agency concept to algorithmic injuries within the vicarious liability schema is challenging primarily because it must be established that the agent was an employee acting in the course of employment of the principal. It remains questionable, however, whether AI systems can be considered employees. Moreover, delegating responsibility under this regime requires establishing that the harm caused by the employee occurred in the course of employment. For this reason, third parties do not bear the burden of proving that the robot was acting under its legal authority. Furthermore, employers cannot evade liability in court by claiming that the robot had finished its duties and caused the harm while on an activity of its own, for such claims would never be admitted in court. Therefore, the owners/users of these autonomous technologies would be held strictly liable under vicarious liability. This makes it plausible to conclude that alternative paradigms should be considered when attributing liability for AI-caused harms.


Negligence-Based Approach – An Alternative?

While the traditional liability regime suggests tracing liability back to a specific person irrespective of negligence on their part, it does not specify which of the multiple stakeholders involved should be held responsible for the harm caused, for the question of under whose employment the autonomous system operates remains complex and unresolved. This article, therefore, suggests that a negligence-based approach is preferable in deciding the “Who Pays?” dilemma. This approach imputes liability to the party negligently involved with the algorithm.


With respect to the hypothetical posed at the beginning of this piece relating to the unmanned crane, the harm was inflicted by a “computer tortfeasor.” Computer-generated torts are those cases wherein the computer tortfeasor occupies the position of a reasonable person in the negligence regime. This implies that it steps into the shoes of a human worker such that, if the computer were a person, it would be liable under negligence according to the reasonable person standard. When shifting from strict liability to negligence, the burden of proof lies on the manufacturer to show that the computer tortfeasor is safer on average than a person. Consider another hypothetical situation wherein an AI-driven surgical robot used by a hospital harms a patient, not because the hospital erred in fulfilling its duty of care but due to a robot malfunction that was not foreseeable by anyone. Applying the vicarious liability model here, the hospital, and not the manufacturer, would be liable for the tortious act, for the robot was working within the employment of the hospital. However, it is suggested that the machine should be assessed under negligence if it outperforms human surgeons generally but underperforms in certain areas. Once it is established that the machine involved is safer than a person, the focus shifts to whether the computer’s act was negligent. Likewise, where the harm was inflicted by the computer-operated unmanned crane, negligence-based manufacturer liability would be imputed only if the crane is safer than a human operator on average.


Additionally, these computer-generated torts could be resolved by introducing a pre-determined responsibility model of a “level of care” to be practiced by the designers, manufacturers, and owners of emerging digital technologies. If this level of care is unmet, a presumption of negligence would arise against the defendant, thus triggering liability, whereas if it is met, the burden of proof would shift to the plaintiff to prove negligence.


Conclusion

It is crucial that suitable legal and policy frameworks be put in place to direct the development of technology and to ensure its widespread benefits. In the coming decades, as people and machines compete in an expanding diversity of activities, deliberations in this field will become increasingly significant. The main objective of tort law is to optimize accident deterrence and indemnify victims based on an assessment of all the interests involved. As already discussed, vicarious liability is not feasible for AI-caused harms, for it would mean establishing a binding relationship between the human principal and the AI system while holding the former responsible for the latter’s tortious acts, a relationship that remains tenuous given the ever-changing nature of AI. To cope with the evolving dynamics of AI, a negligence-based model is advocated, wherein liability is determined by applying the reasonable person test to the facts of the case.

 

This article has been authored by Shriya Singh and Aditi Anup, students at OP Jindal Global University. This blog is a part of RSRR’s Blog Series on “Emerging Technologies: Addressing Issues of Law and Policy”, in collaboration with Ikigai Law.


