The Role Of Legislation In The Regulation Of Artificial Intelligence …


The development and adoption of Artificial Intelligence ("AI") has seen a global surge in recent years. It is estimated that AI has the potential to add USD 957 billion, or 15 per cent of the current gross value added, to India's economy in 2035. It is projected that the AI software market will reach USD 126 billion by 2025, up from USD 10.1 billion in 2018. AI is being applied to an increasing variety of private and public uses, and it is expected that AI usage will become ingrained and integrated with society.1

In India, large-scale applications of AI are being implemented and used across various sectors such as healthcare, agriculture, and education to improve outcomes in these sectors. In February 2021, the NITI Aayog released an approach document proposing principles for 'responsible AI' development ("Approach Document").

AI is set to be a "defining future technology"; but what exactly is AI, and what are the challenges and considerations for regulating AI?

The scope of this article is to examine the challenges and considerations in the regulation of AI in India. We have also examined the approach to the regulation of AI in other developed jurisdictions such as the European Union and the United States. This article relies on the Approach Document to understand the systems considerations and societal considerations that arise from the implementation of AI in technology and society. The AI considered here is 'Narrow AI', a broad term for AI systems designed to solve specific challenges that would ordinarily require domain experts. Broader ethical implications of 'Artificial General Intelligence' (AGI) or 'Artificial Super Intelligence' (ASI) are not considered in this article. Further, the systems considerations discussed here mainly arise from decisions taken by algorithms.2

The Approach Document describes "Artificial Intelligence" as "a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. Natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also take decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time".3

The integration of AI into technology and society gives rise to unique challenges. Further, as AI becomes more sophisticated and autonomous, concerns with respect to accountability, bias, and societal well-being may arise.

Two main considerations can be identified while implementing AI: (i) systems considerations and (ii) societal considerations.4 We further analyse the regulatory implications stemming from these considerations.

(i) Systems Considerations: Systems considerations are implications that have direct impacts on citizens (or primary 'affected stakeholders') who are subject to the decisions of a specific AI system. These typically result from system design choices, development, and deployment practices.5

Some of the systems considerations are:

(a) Potential for bias: Though automated solutions are often expected to introduce objectivity to decision-making, recent cases globally have shown that AI solutions can be 'biased' and tend to be 'unfair' to certain groups (across religion, race, caste, gender, and genetic diversity). The emergence of bias in AI solutions is attributed to several factors arising from decisions taken across different stages of the lifecycle and the environment in which the system learns. The performance of an AI solution is largely dictated by the rules defined by its developers, and the responses it generates are limited by the data set on which it is trained. Hence, if the data set includes biased information, the responses generated will naturally reflect the same bias. While this is not intentional, it is practically inevitable, since no data set can be entirely free from bias. An AI solution cannot critically examine the data set it is trained on, since it lacks comprehension, and is hence incapable of eliminating the bias without some form of human intervention.
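By way of illustration only, the following is a minimal sketch of the kind of bias check a developer might run over a model's decisions before deployment. The data, group labels and the 80 per cent "disparate impact" rule of thumb used here are assumptions made for the example; they are not drawn from the Approach Document or from any statute.

```python
# Minimal sketch: measuring disparate impact of a model's decisions across
# demographic groups. The data set, column names and threshold below are
# illustrative assumptions, not a statutory or regulatory requirement.
from collections import defaultdict

def selection_rates(records):
    """Compute the favourable-outcome rate for each group.

    records: iterable of (group, decision) pairs, where decision is 1 for a
    favourable outcome (e.g. loan approved) and 0 otherwise.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favourable[group] += decision
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical model decisions for two demographic groups.
    decisions = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
              + [("group_b", 1)] * 50 + [("group_b", 0)] * 50
    rates = selection_rates(decisions)
    print(rates)                                   # {'group_a': 0.8, 'group_b': 0.5}
    print(round(disparate_impact_ratio(rates), 2)) # 0.62 -- below the common 0.8 rule of thumb
```

A check of this kind does not by itself remove bias; it only flags a disparity that would then need human review, better data or model changes, which is the sort of mitigation practice the text above suggests regulation could require.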

Bias is a serious threat in modern societies. We cannot, therefore, risk developing AI systems with inbuilt biases. The role of regulation in this regard would be to specify and penalize the development of AI with such biases. Regulation must also prescribe that developers invest in research and development of bias detection and mitigation, and incorporate techniques in AI to ensure fair and unbiased outcomes. Legislation must further provide for penalties on developers who develop AI that produces biased outcomes.

(b) Accountability of AI decisions: In the development of AI, it is understood that different entities may be involved in each step of the development and deployment process. The involvement of different entities in complex computer systems makes it difficult to assign responsibility, determine accountability and provide legal recourse.

Since there are many individuals or entities involved in the development of AI systems, assigning responsibility or accountability, or identifying the individual or entity responsible for a particular malfunction, may be difficult. Consequently, pursuing legal recourse for the harm caused is a challenge. Traditional legal systems allocate responsibility for actions and their consequences to a human agent. In the absence of a single human agent, it is essential for regulation to provide a methodology for identifying or determining the individual or entity involved. All stakeholders involved in the design, development and deployment of AI systems must be made specifically responsible for their respective actions. The imposition of such obligations can be achieved through regulation.
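As an illustration of how such traceability might be supported in practice, the following is a minimal sketch of a decision audit record that a deployer could keep so that a particular output can later be traced to a model version, a training data version and the entities involved. The field names, roles and log format are assumptions made for the example; they are not prescribed by any existing Indian regulation.

```python
# Minimal sketch: recording provenance for each AI decision so that
# responsibility can later be traced. Field names and roles below are
# illustrative assumptions, not a statutory schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str           # which trained model produced the output
    training_data_version: str   # which data set the model was trained on
    developer: str               # entity that built the model
    deployer: str                # entity that put the system into service
    input_summary: str           # non-sensitive summary of the input
    output: str                  # the decision or recommendation made
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
    """Append one decision record as a JSON line to an audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    # Hypothetical example: a declined loan application.
    log_decision(DecisionRecord(
        decision_id="2023-000123",
        model_version="credit-scoring-v1.4",
        training_data_version="applications-2022Q4",
        developer="Example Model Labs",
        deployer="Example Finance Ltd",
        input_summary="loan application, salaried applicant",
        output="declined",
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
```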

(ii) Societal considerations: Societal considerations are implications caused by the overall deployment of AI solutions in society. These have potential repercussions on society beyond the stakeholders directly interacting with the system. Such considerations may require policy initiatives by the Government.6

One of the societal considerations is the "impact on jobs". The rapid rise of AI has led to the automation of several routine job functions and has consequently led to large-scale layoffs and job losses. The use of AI in the workplace is expected to result in the elimination of a large number of jobs in the future as well.

Regulation, through appropriate provisions in labour or employment law legislation, can in this regard ensure that work functions are not arbitrarily replaced by AI. It is well understood that corporations are driven by profit, and AI may hence be a cost-effective option. Nevertheless, it is possible to regulate through legislation any such replacement of human jobs by AI in the larger interests of society.

Currently, India does not have codified laws, statutory rules or regulations that specifically regulate the use of AI. Establishing a framework to regulate AI would be crucial for guiding various stakeholders in the responsible management of AI in India.

There are certain sector-specific frameworks that have been identified for the development and use of AI.7 In the finance sector, SEBI issued a circular in January 2019 to Stockbrokers, Depository Participants, Recognized Stock Exchanges and Depositories on reporting requirements for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used.

In the health sector, the strategy for the National Digital Health Mission (NDHM) identifies the need for the creation of guidance and standards to ensure the reliability of AI systems in health.

Recently, on June 9, 2023, the Ministry of Electronics and Information Technology (MEITY) suggested that AI may be regulated in India just like any other emerging technology, in order to protect digital users from harm. MEITY mentioned that the purported threat of AI replacing jobs is not imminent, because present-day systems are task-oriented, are not sufficiently sophisticated, and do not have human reasoning and logic.8

The European Union: In April 2021, the European Commission proposed the first European Union ("EU") regulatory framework for artificial intelligence ("AI Act").9

The AI Act defines an "artificial intelligence system (AI system)" as a "machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments".10 The AI Act would regulate all automated technology. It defines AI systems to include a wide range of automated decision-makers, such as algorithms, machine learning tools, and logic tools.

This is the first comprehensive framework for regulating AI and is part of the EU's strategy to set worldwide standards for technology regulation. Recently, on June 14, 2023, the European Parliament approved its negotiating position on the proposed AI Act, ahead of talks between representatives of the European Parliament, the Council of the European Union and the European Commission on the final shape of the law. The aim is to reach an agreement by the end of this year.11 The second half of 2024 is the earliest time the regulation could become applicable to operators, with the standards ready and the first conformity assessments carried out.12

The AI Act aims to ensure that AI systems used in the EU market are safe and respect existing laws on fundamental rights and EU values. The AI Act proposes a risk-based approach to guide the use of AI in both the private and public sectors. It defines three risk categories: unacceptable-risk applications, high-risk applications, and applications not explicitly banned. The regulation prohibits the use of AI in critical services that could threaten livelihoods or encourage destructive behaviour, but allows the technology to be used in other sensitive sectors, such as health, with maximum safety and efficacy checks. The AI Act would apply primarily to providers of AI systems established within the EU or in a third country placing AI systems on the EU market or putting them into service in the EU, as well as to users of AI systems located in the EU.

The United States: As per press reports, in a meeting with President Biden at the White House, seven leading artificial intelligence companies, including Google, Meta, OpenAI and Microsoft, agreed to a series of voluntary safeguards designed to help manage the societal risks of AI and the emerging technologies built on it. The measures, which include independent security testing and public reporting of capabilities, were prompted by some experts' recent warnings about AI. The U.S. is at what is expected to be only the beginning of a long and difficult path toward the creation of rules to govern an industry that is advancing faster than lawmakers typically operate.

AI is growing at a fast pace and is rapidly being integrated into society. There is therefore a clear need to regulate AI to prevent systems and societal risks. There are several challenges in regulating AI, which can make the task seem impossible to achieve. Traditionally, too, the law has not been able to keep up with new technologies. However, if regulators work at understanding the technology involved in AI, as well as the systems and societal considerations it raises, comprehensive and effective legislation on AI may be created. India may also draw inspiration from the legislation in the EU in this regard. Legislation thus has a key role to play in ensuring the effective and fair implementation of AI in society and technology.

Footnotes

1. Approach Document Page 6.

2. Approach Document Page 7.

3. Approach Document Page 7.

4. Approach Document Page 8.

5. Approach Document Page 9.

6. Approach Document Page 9.

7. https://www.sebi.gov.in/legal/circulars/jan-2019/reporting-for-artificial-intelligence-ai-and-machine-learning-ml-applications-and-systems-offered-and-used-by-market-infrastructure-institutions-miis-_41927.html

8. https://www.livemint.com/ai/artificial-intelligence/india-will-regulate-ai-to-ensure-user-protection-11686318485631.html

9. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

10. Art. 3 No. 1 of the AI Act.

11. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

12. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
