Ameen Jauhar & Aditya Vyas

Can Absolute Liability Standards be Applied to Social Media Platforms?

Prefatory Remarks

Social media has increasingly become a volatile and highly acerbic medium for information exchange. While the problem of disinformation is not new, social media platforms have afforded an unprecedented level of access to individuals and institutions engaged in information operations. Whether private, government-sponsored, or independent, such entities can rely on social media for a hitherto unmatched level of dissemination, behind the anonymity of online personas. Information operations, defined as the dissemination of false information with a malicious intent to steer public discourse, have been rampant on most popular platforms (like Facebook and Twitter). Confronting and effectively curbing such abuse of the platforms’ ubiquitous reach, while preserving free speech and expression, has become a nightmarish conundrum for legislators, policymakers, and researchers across jurisdictions.


Social media giants have not stood by as mute spectators, but they have shown only limited initiative in curbing the abuse of their platforms. Their “community standards” and guidelines are often cited as adequate self-regulation, but there is documented evidence impugning the actual effectiveness of these measures. Furthermore, with the growing ease and sophistication with which bot accounts are deployed for information operations, there is a real question as to how far these largely manual content moderation tactics can succeed.


One school of thought, which has found some translation into domestic law, is to hold social media companies more stringently responsible for information that remains on their platforms beyond a specified period despite takedown notices from the appropriate authorities. Countries are enacting legislation which assigns a higher degree of responsibility to social media companies to be more innovative and prudent in their self-regulation, failing which penalties can be imposed.


It is against this background that we may want to consider applying “absolute liability” standards to social media platforms, given the recorded disastrous aftermath of numerous information operations that have already surfaced. Before examining how these standards would translate into the context of platform regulation, we delve briefly into what absolute liability is, its core elements, and how the regulation of social media platforms arguably befits the use of this doctrine.


Absolute Liability – The Jurisprudential Evolution of this Common Law Principle

The doctrine of strict liability was crafted in a landmark judgment in the latter half of the nineteenth century. Rylands v. Fletcher, an English case featuring in practically every tort law textbook, was the seminal judgment which established the elements of this more stringent form of liability. The case concerned “a dangerous thing” brought onto one’s land: should that thing escape and cause harm, the person bringing and keeping it would be strictly liable, even if they had taken the necessary precautions. Act of God, third-party actions, and contributory negligence were the limited exceptions to this higher degree of liability.


Since its initial conceptualization, the doctrine of strict liability has undergone significant changes. Prominently, in the Indian context, an even more stringent standard of “absolute liability” (also known as no-fault liability) was established in the Oleum Gas Leak case. Under this standard, commercial enterprises undertaking hazardous and inherently dangerous activities can be held liable for the damage caused, even in the absence of any culpability. This absolute, no-fault form of liability is attributable to two key reasons. First, the enterprise knowingly and intentionally engages in a dangerous activity; and second, it conducts this activity for lucrative financial gains and profiteering.


Using these key elements of “absolute liability”, we will now examine whether the doctrine can be applied to the regulation of social media platforms.


Are Social Media Platforms an Inherently Dangerous Commercial Enterprise?

Social media platforms have become real challengers to conventional media (in both print and mass media formats), and are increasingly the dominant source of information on politics, global issues, and even medical advice (in the time of an ongoing pandemic). Given their dominant and pervasive influence in the lives of their users, the spread of false information, whether malicious or not, poses a real risk. For instance, a recent study exposed how the COVID-19 pandemic has been accompanied by an equally virulent infodemic, which has led to actual loss of human life owing to the spread of false information and rumours about this global catastrophe. The bottom line is that social media is hardly an innocuous space on the internet; it has instead far outgrown its own intended purposes and morphed into a much more complex beast.


In light of this menace, it is arguable that the business enterprise of social media companies is inherently designed around the principle of “high risk, high reward”. However, while the risk is borne by the larger community that is often targeted through these information operations, the reward is reaped by the tech giants controlling the platforms. The revenue-generation modus operandi of most of these platforms is inextricably linked to their increasing role as information disseminators. This nexus, and its implications for the unbridled spread of false information on these platforms, is discussed in the next section.


Are Social Media Platforms a Lucrative Commercial Enterprise?

The financial filings of Facebook and Twitter show that their revenue is primarily generated through advertisements and viewership of specific content on their respective platforms. When it comes to advertisements, increasing concern has been expressed about the veracity of such content, especially political advertisements. In fact, in a formal hearing of the US Congress, Representative Alexandria Ocasio-Cortez categorically raised the problem of falsehoods and political smear campaigns orchestrated through political advertisements, bought and paid for by different vested interests.


In addition to political advertisements, even ordinary posts can become lucrative revenue sources, depending on their ability to attract viewership. As a recent article critiquing this aspect of social media platforms puts it: “Incentivizing enticing content is baked into the business models… the platforms organically generate an enabling environment for alarming, sensationalist media designed to trigger clickbait behavior.” The objective of such financial models is to emphasise the saleability of content, which takes precedence over its veracity and public ramifications.


In fact, it can be argued that both these models of revenue generation rely on sensationalism and, often, false or misleading information. Thus, there is an innate nexus between the dangers and risks associated with running such platforms, and the monies they accumulate for their controlling companies. Given this direct and commensurate linkage to revenue inflow, it is only logical that the tech companies are reluctant to meaningfully regulate these advertisements; a concern that has repeatedly featured in criticism of these companies. This is precisely why, in more recent times, some corporations have pulled their advertisements from these platforms as a decisive sign of protest against insufficient content moderation to curb the spread of mis/disinformation.


Making a Case for Applying “Absolute Liability” to Social Media Platforms

From the preceding sections, a compelling argument emerges that the social media giants have evolved far beyond their rudimentary models of social exchange of thoughts and pictures. Those sections reveal the volatile and risky nature of social media platforms, and establish a real financial motive for the controlling tech companies to leave their content only tokenistically moderated. Under the garb of self-regulation, these companies have applied some moderation standards, but the sheer volume of content, and their lack of familiarity with local and subjective contexts, often hampers the effectiveness of these measures.


Therefore, while more robust regulatory frameworks are in the pipeline, it may be viable to impose absolute liability on these social media platforms in the interim. In proven instances of actual public loss (be it meddling in a national election, as in the United States, or incitement of communal violence), these companies should not be allowed to wriggle out by claiming neutral intermediary status. An exemplary and punitive financial disincentive may provide the regulatory nudge needed for better self-regulation initiatives to be developed within this sector.

 

Authored by Mr. Ameen Jauhar, Senior Resident Fellow at Vidhi Centre for Legal Policy. He was assisted by Mr. Aditya Vyas, student of RGNUL, Punjab. This blog is a part of the RSRR Blog Series on Artificial Intelligence, in collaboration with Mishi Choudhary & Associates.
