Ratul Roshan & Ridhi Gupta

The F-A-T Debate in Indian AI Regulation

Introduction and Structure

One of the earliest documents to lay out the Indian government’s plans in relation to AI was the 2018 Report of the DPIIT’s Artificial Intelligence Task Force. The report noted that India’s AI vision should prioritise problem solving over economic growth. This seems to be the common thread running across all subsequent Indian AI policy documents – ensuring the ethical application of AI for problem solving. But how is ethical AI developed?


This article crystallises observations from various important Indian policy documents on how AI regulation is likely to pan out in India to ensure the development of ethical AI. It first cursorily explores which government body heads the charge on AI regulation in India, after which it discusses the fairness, accountability and transparency or ‘FAT’ debate – the three guiding principles for the development of ethical AI, not only in India but globally. Lastly, the article examines the various alternatives proposed to imbue FAT in AI systems. To conclude, the authors posit their opinions on which of these alternatives are best suited to achieve India’s dream of becoming a one trillion dollar digital economy by 2025, and an AI garage for 40% of the world’s population.


Whose Turf is it Anyway?

Many ministries and government organisations have shown interest in taking the conversation around AI regulation forward. These include the Ministry of Electronics and Information Technology (“MeitY”) (here), the Ministry of Commerce and Industry (here), the Department of Telecommunications (which functions under the Ministry of Communications) (here and as recently as September 2020 here), and the NITI Aayog (here and here). Each ministry/organisation has undertaken public consultations or released guidelines tailored to the needs of its specific sector.


This has two implications. First, it indicates that AI regulation is likely to be sector driven. In fact, this is something which the NITI Aayog’s latest document, the ‘Draft Document – Towards Responsible #AIforAll’ (‘AIforAll draft’), also recognises; it notes that AI regulation in several other jurisdictions has followed a roughly three-tiered structure – (a) overarching guidelines or regulations established specifically for AI, (b) sector-specific regulations which apply to AI, and (c) sector-agnostic laws which may apply to AI[i]. Such sectoral regulation risks regulatory conflict where mandates overlap, but on the flipside allows regulation tailored to each sector’s needs.

The second implication deals with identifying the entity which will lead the charge on AI regulation, standard setting, infrastructure development etc. in India. Currently, the two entities most active in this segment are the NITI Aayog and the MeitY, and both have released multiple policy documents with pan-India and pan-sector applicability aimed at creating a national AI mission. Interestingly, the NITI Aayog, being only a government think tank, is not empowered to draft regulations for AI in India, but it may be able to influence certain standards in the AI ecosystem (much like it has been able to do with regards to the National Health Stack and DEPA, which is part of the India Stack architecture), to the MeitY’s chagrin. In 2018, the NITI Aayog released an AI strategy for India (here) (‘AI Strategy Document’), in which it proposed a two-tiered structure to further India’s AI research aspirations, comprising (a) Centres of Research Excellence or COREs, focused on “pushing technology frontiers through creation of new knowledge” and (b) International Centers of Transformational AI or ICTAIs, focused on “developing and deploying application-based research…with private sector collaboration”[ii], alongside a dedicated cloud infrastructure for AI called AIRAWAT[iii].

Soon after, the NITI Aayog sought INR 7,000 crores from the government to establish an Indian AI Mission, which created some discomfort within the MeitY, which was drafting its own plan for an INR 400 crores AI programme for India. These concerns were exacerbated in September 2019, after reports emerged that the finance ministry had cleared the NITI Aayog’s INR 7,500 crore proposal to set up an AI framework for India over the subsequent three years. To address this, in mid-October 2019, the government reportedly formed a committee to (i) resolve overlaps between the MeitY’s and the NITI Aayog’s AI plans; and (ii) specify the roles of different agencies to fast-track implementation of the AI mission. The committee, which has representation from the Department of Science and Technology, the MeitY and the NITI Aayog, is yet to come out with its report, leading to growing regulatory uncertainty.


FAT Machine Learning – The Way Ahead

Any person interested in AI regulation is sure to have run into the acronym FAT or FATE. This stands for Fairness, Accountability and Transparency (and Ethics) in AI, and embodies the understanding that any regulation of AI would be targeted at ensuring these three elements, such that AI-based solutions can be deployed in a safe, responsible, accountable, and ethical manner.


Fairness

Fairness means that an AI-based solution should not be biased towards or against a certain segment of the population. Since data sets used to train AI are picked up en masse and are not usually manually collated, they are rarely truly neutral. This has been observed in multiple government documents[iv] on AI, most recently in the AIforAll draft.


Here’s an example – COMPAS (‘Correctional Offender Management Profiling for Alternative Sanctions’) is an algorithm used by US state courts to predict the likelihood of a criminal reoffending, and acts as a guide for criminal sentencing. On scrutiny, it was found that the algorithm overwhelmingly and inaccurately predicted that black defendants were more likely to reoffend. This was perhaps because it was trained on historical criminal statistics in the US, which overrepresented black defendants.
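Audits of this kind essentially compare error rates across groups. The sketch below is purely illustrative – the data and column names are hypothetical, not the actual COMPAS schema – and shows the core computation: the per-group false positive rate, i.e., the share of defendants who did not reoffend but were still flagged as likely to.

```python
# Purely illustrative data; column names and values are hypothetical
# and do not reflect the actual COMPAS schema.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "flagged":    [1, 1, 1, 0, 0, 1, 0, 0],   # 1 = flagged as likely to reoffend
    "reoffended": [0, 1, 0, 0, 0, 1, 0, 0],   # 1 = actually reoffended
})

# False positive rate per group: of those who did NOT reoffend,
# what share did the algorithm still flag as high risk?
for group, sub in df.groupby("group"):
    non_reoffenders = sub[sub["reoffended"] == 0]
    fpr = (non_reoffenders["flagged"] == 1).mean()
    print(f"{group}: false positive rate = {fpr:.2f}")
```

On this toy data the two groups have very different false positive rates despite similar reoffending rates, which is the kind of disparity the COMPAS audit surfaced.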


These problems are exacerbated further because technology cannot, by itself, delineate whether it is appropriate to rely on a given trait or not – for instance, while gender information may be useful to diagnose breast or prostate cancer, it may not be relevant to assessing potential hires, but an algorithm does not know that.[v]


Proposed Regulatory Approaches

Government policy documents have explored myriad means to reduce bias and ensure fairness. The NITI Aayog earlier argued for a reactive approach which would identify biases in AI solutions post-facto and find ways to reduce them[vi], till reliable AI data feeding/training solutions were developed.[vii] More recently, the NITI Aayog’s AIforAll draft proposed technical solutions to ensure fairness in AI training data sets.[viii] Examples include IBM’s ‘AI Fairness 360’, an open source software toolkit that helps detect biases in AI through regular checks using state-of-the-art algorithms; Google’s ‘What-If’ Tool, which allows users to test machine learning models without writing any code; Fairlearn, a tool to assist data scientists and developers in assessing and improving the fairness of AI systems; and open source frameworks such as FairML, a Python toolbox for auditing machine learning models.[ix]
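By way of illustration, here is a minimal sketch of the kind of per-group check tools like Fairlearn enable. It assumes the fairlearn and scikit-learn packages are installed, and uses synthetic labels, predictions and a synthetic sensitive feature in place of a real model’s test-set outputs.

```python
# Minimal Fairlearn sketch: break model metrics down by a sensitive
# feature, then compute a single disparity summary. Data is synthetic.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]   # sensitive feature

# Break accuracy and selection rate down by group to surface disparities.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)

# One summary number: 0 means both groups are selected at the same rate.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```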


A MeitY committee has sided with human intervention to curate data sets to achieve fairness[x], which aligns with the government’s push for light-touch regulation/self-regulation of AI in India. In line with the same, it has also proposed stakeholder-driven testing and the formalisation of best practices through self-regulatory bodies, as opposed to hard government intervention or intrusive checks into the algorithms of various AI solutions.[xi]


Accountability

The discourse around accountability seeks to determine whom to fix liability on when the (mis)application of an AI solution causes loss or harm to someone.[xii] Who do you hold responsible if automated trades made by an AI cause you losses in the millions? Or if a self-driving car is involved in an accident that claims a life? These problems are magnified in the case of self-learning AI, which selects and learns from databases it shortlists itself[xiii]. Further, inadequate consequences reduce the incentive for responsible AI development[xiv] and hinder effective grievance redressal.[xv]


Proposed Regulatory Approaches

Many innovative architectures have been proposed to split liability for AI solutions, but, per the NITI Aayog, the overarching approach in this regard needs to shift from ascertaining and fixing liability to objectively identifying the fault and preventing a repetition[xvi]. To this end, it also proposed (a) moving from a standard of ‘strict liability’ to one of ‘negligence’ to ascertain liability[xvii]; (b) the introduction of safe harbours to insulate AI solutions if appropriate steps to design, test, monitor, and improve the AI product were taken by the developer[xviii]; (c) creating a framework to split any liability proportionately amongst stakeholders[xix]; and (d) introducing an actual harm policy so that a lawsuit cannot be instituted merely on speculative damage or a fear of future damages[xx].


In the past, a MeitY committee proposed devising frameworks to apportion liability between designers and deployers[xxi]. As a guiding principle, it has also recommended drafting flexible principles to guide self-regulatory efforts which can identify appropriate accountability mechanisms.[xxii] Furthermore, it noted that as AI systems develop to take independent decisions with respect to their training, it should also be explored whether they should be recognised as legal persons.[xxiii] This would allow them to hold their own insurance and compensation funds to compensate for any damages.[xxiv]


Contrary to this, the Indian Society of Artificial Intelligence and Law proposed fixing liability on developers for harms caused by AI/ML solutions;[xxv] however, there seems to be little backing for this approach. Overall, there is little consensus amongst stakeholders, or between the MeitY and the NITI Aayog, on which of these alternative routes to tread, so we may see substantial discussion on this going forward.


Transparency

Transparency in AI entails that stakeholders should be able to ascertain how an AI system produced an output from a given input – somewhat like a window into the inner workings of an AI solution. Since AI solutions use multiple (and evolving) parameters to function, it may sometimes become impossible to unpack how a particular result was produced by the algorithm.[xxvi] With self-learning and ‘deep learning’ solutions, this may lead to a black-box like phenomenon, where the user cannot see the inputs and operations of the AI system.[xxvii] Not only does this reduce trust in the algorithm, it also hurts its adoption, impedes auditing, and makes debugging and compliance with sectoral regulations difficult[xxviii].


A popular example is IBM’s AI medical diagnosis solution ‘Watson’,[xxix] which performed well in controlled trials but was not able to make accurate predictions in the real world.[xxx] Since developers and doctors alike could not understand how a particular diagnosis was reached, they could not debug the product, which impacted its uptake.[xxxi]


Proposed Regulatory Approaches

There appears to be some consensus between the MeitY[xxxii] and the NITI Aayog[xxxiii] that the solution to the black box phenomenon is not opening up the relevant AI solution’s code to public scrutiny, but enhancing the ‘explainability’ of the algorithm (sometimes also called ‘XAI’ for explainable AI[xxxiv]). XAI seeks to design AI solutions in a manner which makes it easier to understand how a particular result was achieved – there are multiple approaches to making AI explainable, which have not been examined here (read more here and here).


Some specific proposals of the NITI Aayog include leveraging pre hoc analysis (like Exploratory Data Analysis, concept extraction, dataset summarization, and distillation techniques) and post hoc interpretation (like input attribution – LIME (Local Interpretable Model-Agnostic Explanations), SHAP (SHapley Additive exPlanations), DeepLift – and example influence matching – MMD-critic, influence functions, etc.) of AI responses.
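As an illustration of post hoc input attribution, here is a minimal sketch using LIME, one of the techniques named above. It assumes the lime and scikit-learn packages are installed; the model and dataset are stand-ins chosen purely for the example.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
# The dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME fits a simple surrogate model around this one instance and
# reports the features that drove the output locally.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The attraction of such post hoc techniques is that the underlying model’s code and weights stay private; only per-prediction explanations are surfaced, which is consistent with the position that code sharing need not be mandated.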


A MeitY committee recommended that the government could specify particular AI applications which require algorithm explainability to mitigate discrimination and harm to individuals.[xxxv]


Conclusion

Substantial work remains to be done in the field of AI, and any government-led hard regulation is only likely to fetter innovation in this field. Fortunately, both the NITI Aayog and the MeitY have clarified that they are unlikely to mandate algorithm code sharing to achieve FAT in AI systems, which comes as some respite to innovators in the field. Industry responses to the personal data regulation and non-personal data regulation abound with arguments that mandatory data sharing, data access, and hard regulation will detrimentally impact innovation. In light of the same, the MeitY’s approach, which tends towards self-regulation, stakeholder-led development of shared standards, and an overall light-touch regulatory approach, is commendable and should be maintained. Further, any form of regulation must also promote private sector involvement, which would ensure the development of neutral, efficient and up-to-date technical solutions to some of the problems highlighted above. This is because private sector participants are in the weeds of things, which not only gives them visibility into practical problems with algorithms, but also presents them with an inherent incentive to solve these problems to increase the credibility of their offerings and their brand. Further, they have the human capital to achieve this. AI regulation in India is only in its nascent stages at present, and it will be interesting to see how it develops going forward.

 

[i] Pages 16 through 21, AIforAll Draft, available at https://niti.gov.in/sites/default/files/2020-07/Responsible-AI.pdf.

[iii] Box 15, AI Strategy Document.

[iv] These include the AI Strategy Document (Page 85), MeitY’s Report of Committee – A on Platforms and Data on Artificial Intelligence (Page 12) available at https://www.meity.gov.in/writereaddata/files/Committes_A-Report_on_Platforms.pdf, MeitY’s Report of Committee – C on Mapping Technological Capabilities, Key Policy Enablers Required across Sectors, Skilling And Re-Skilling, R&D (Page 44), available at https://www.meity.gov.in/writereaddata/files/Committes_C-Report-on_RnD.pdf, and MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues (Page 24) available at https://www.meity.gov.in/writereaddata/files/Committes_D-Cyber-n-Legal-and-Ethical.pdf, amongst others.

[v] Page 24, MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues.

[vi] Page 85, AI Strategy Document.

[vii] Page 85, AI Strategy Document.

[viii] Page 22, AIforAll Draft.

[ix] Page 22, AIforAll Draft.

[x] Page 44, MeitY’s Report of Committee – C on Mapping Technological Capabilities, Key Policy Enablers Required across Sectors, Skilling And Re-Skilling, R&D.

[xi] Page 29, MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues.

[xii] Page 88, AI Strategy Document.

[xiii] Page 12, AIforAll Draft.

[xiv] Page 12, AIforAll Draft.

[xv] Page 12, AIforAll Draft.

[xvi] Page 88, AI Strategy Document.

[xvii] Page 88, AI Strategy Document.

[xviii] Page 88, AI Strategy Document.

[xix] Page 88, AI Strategy Document.

[xx] Page 89, AI Strategy Document.

[xxi] Page 27, MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues.

[xxii] Page 31, MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues.

[xxiii] Page 37, MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues.

[xxiv] Page 37, MeitY’s Report of Committee – D On Cyber Security, Safety, Legal and Ethical Issues.

[xxv] Page 21, Indian Strategy for AI and Law, 2020, Indian Society of Artificial Intelligence and Law.

[xxvi] Page 43, MeitY’s Report of Committee – C on Mapping Technological Capabilities, Key Policy Enablers Required across Sectors, Skilling And Re-Skilling, R&D.

[xxvii] Page 9, AIforAll Draft.

[xxviii] Page 9, AIforAll Draft.

[xxxi] Page 9, AIforAll Draft.

[xxxii] Page 26, MeitY’s Report of Committee – C on Mapping Technological Capabilities, Key Policy Enablers Required across Sectors, Skilling And Re-Skilling, R&D.

[xxxiii] Page 86, AI Strategy Document.

[xxxiv] Page 86, AI Strategy Document.

[xxxv] Page 44, MeitY’s Report of Committee – C on Mapping Technological Capabilities, Key Policy Enablers Required across Sectors, Skilling And Re-Skilling, R&D.


Authored by Mr. Ratul Roshan, Associate at Ikigai Law. He was assisted by Ms. Ridhi Gupta, student of RGNUL, Punjab. She has been credited as a co-author for this article by the author. This blog is a part of the RSRR Blog Series on Artificial Intelligence, in collaboration with Mishi Choudhary & Associates.
