Opinion | Artificial Intelligence: Whither India?

Once the preserve of lawyers, computer programmers and specialists in digitalisation, Artificial Intelligence (AI) and the questions surrounding its future governance have in recent weeks risen to the forefront of the international political agenda. AI is evidently a transformative technology with great promise for our societies, offering unparalleled opportunities to increase prosperity and equity. But alongside its growing pervasiveness, concerns are mounting about its potential risks and threats, and this dawning realisation brings in its train reflections on whether, and if so how, to regulate it in the public interest.

Any meaningful discussion of AI and the scope for its regulation has to begin, of course, with finding a common understanding of the term. The Organisation for Economic Co-operation and Development (OECD), an intergovernmental organisation, has established the first principles on AI, which reflect its own founding principles of trust, equality, democratic values and human rights. It defines AI as "a machine-based system capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives."

But the OECD is not alone in seeking to define AI. We are currently witnessing the emergence of numerous and at least potentially competing definitions of AI, a technology that is developing and morphing so fast that it may defy a static definition, which of course poses problems not only for regulators but also for businesses. Some of the definitions are technologically neutral, while others are what one might describe as more value-based.

In all likelihood, value-based definitions will remain more fit for purpose, since they rely less on the current state of AI's technological development and instead introduce the political and ethical framework that is now driving the concerns of governments and regulators. The international community is increasingly questioning the role of AI in our society not only from a technological and economic perspective but also from a moral one. Is AI the Good Angel of fairy tales? Or is it, as some would have it, Frankenstein's Monster? In short, the global community is becoming more aware of the risks attached to AI, and not just the opportunities it presents. Some salutary illustrations of this:

First, an open letter from a large group of eminent scientists this March called on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control," the letter read.

This quite drastic warning was more recently echoed by Professor Geoffrey Hinton, the man widely credited as the "Godfather of AI." Hinton, in an interview in the New York Times in early June, referred to the risk that the race between Microsoft and Google would push forward the development of AI without appropriate guardrails in place. "We need to find a way to control artificial intelligence before it's too late," he said. "The world shouldn't wait for AI to be smarter than humans before taking control of it as it grows." Even the rap artist Snoop Dogg, confronted with an AI version of himself, wondered: "Are we in a movie? I got AI right now that they made for me. The AI could talk to me! I'm like, man, this thing can hold a real conversation? Like for real for real?"

So, considering the current state of AI, we need to assess to what extent we are protected against AI's negative potential. And if we judge that current levels of protection are insufficient, we need to identify solutions that will maximise AI's positive and innovative contribution to social progress while safeguarding against its abuse or nefarious impacts.

AI is not completely new. It has been around for some 50 years and is integrated into more applications and systems than one might think, but we are now witnessing a proliferation of AI technology at a speed that is outstripping the capacity of regulators to regulate.

To regulate or not to regulate? That is the question.

If Hamlet were with us today, this would surely have been his existential question. Some governments, companies and commentators argue that there is no need to regulate AI through hard law with hard sanctions, and that it is better to take a more flexible approach by encouraging guidance and compliance. Building on this laissez-faire approach, others argue that we can rely on AI's ability to regulate itself, effectively providing the technical solutions to a whole range of challenges and risks, including those surrounding security, privacy and human oversight. And at the other end of the regulatory spectrum are those who call for the application of the precautionary principle, which I supported some years ago in my own doctoral thesis on the protection of consumers online: namely, to regulate even before the risks are identified or clearly emerge. To close the stable door, as it were, before the horse even enters it!

The positive news is that the IT and legal communities and governments are engaging in good faith with one another in an effort to establish common ground and solidarity regarding the potential future governance of AI, starting with this basic question: "Do we need to regulate AI by hard regulation or by voluntary regulation, and at the national or the international level?"

In my view, it would be dangerous simply to reject the notion of regulation. Governments have a responsibility at the very least to consider the dangers the technology poses at its current stage as well as in the future, and arguably to take a proactive approach rather than risk waiting for AI to cause harm before regulating it. We should learn from the experience of the turn of the 20th century, when governments started regulating motor cars only after they had caused accidents and killed people.

The borderless nature of AI adds a layer of complexity to attempts to determine the need to regulate and the scope for doing so. In the past, national governments could generally counteract national market failures by giving consumers protective legislation at the national level. Today, this situation has changed. The fundamental legitimacy of authority has always been based on physical presence and territorially empowered courts, but globally available AI technology reaches every people and every jurisdiction on the planet, and cyberspace itself exists in no physical place. Since AI potentially touches every substantive area of human interaction, the relevant sources of law and regulatory interests potentially span every jurisdiction.

This situation calls for a new and more interconnected assessment of different national legal systems, one which recognises that AI functions as a single global system almost without boundaries and thus, if it is to be regulated at all, needs to be regulated globally.

If one accepts the need to regulate AI, one must at the same time avoid being overzealous, since regulation is a blunt instrument, subject to legal interpretation and sometimes to political interference. Regulation also has the potential to stifle innovation and derail the benefits that AI can bring in areas as diverse as vehicle safety, improved productivity, health, computer programming, and much more. And since AI moves so quickly, regulation needs to be responsive to that pace.

When we think specifically about how we should regulate AI, we need to think each time about the context. It is difficult to regulate AI at a general-purpose level. We do not regulate electronics or computers in general. Instead, we have narrower regulatory regimes.

Earlier, we observed that governments face a choice of regulatory approaches: hard regulation or voluntary regulation, at the national or the international level.

In determining whether, and if so how, to regulate, governments also need to take into account and base their decisions on factors such as risk governance (risk assessment, management and communication), the science-policy interface, and the link between precaution and innovation.

Interestingly, we are seeing that an increasing number of jurisdictions are beginning to develop regulatory frameworks for AI focused on the value-based and human-centric approach that we touched upon earlier. It is constructive at this juncture to survey some of the principal regulatory approaches to AI, starting with the European Union which is arguably the most advanced jurisdiction as regards AI regulation.

The EU is a group of 27 countries that operate as a cohesive economic and political bloc. It has developed an internal single market through a standardised system of laws that apply in all member states in matters where members have agreed to act as one. At the time of writing, Europe is finalising draft legislation on AI, namely the EU's AI Act (AIA), which could go into effect by 2024 and which takes a risk-based approach to the regulation of artificial intelligence.

Together with the Digital Markets Act (DMA) and the Digital Services Act (DSA), the AI Act forms part of the EU's holistic approach to governing the use of AI and information technology in society. Under its risk-based approach, the obligations imposed on a system are proportionate to the level of risk that it poses.

The AI Act categorises applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk, as follows:

Applications in the unacceptable-risk category are banned. They include AI systems that use subliminal, manipulative or deceptive techniques to distort behaviour.

The high-risk areas stipulated in the EU AI Act include AI systems that may harm people's health and human rights, as well as the environment. The Act also places on its high-risk list AI systems used to influence voters in political campaigns, and recommender systems used by social media platforms with more than 45 million users (as designated under the Digital Services Act). Developers of high-risk AI systems will have to comply with stricter obligations in terms of risk management, data governance and technical documentation; a simple schematic of this risk tiering follows the overview below. The legislation goes on to cover the following parameters:

General-purpose AI - transparency measures

Supporting innovation and protecting citizens' rights

The draft EU AI legislation takes pains not to stifle innovation, allowing exemptions for research purposes and promoting regulatory sandboxes, established by public authorities to test AI before its deployment. The draft EU AI Act is now being negotiated between the so-called co-legislators: the Council of the EU (the 27 member states acting collectively) and the elected European Parliament.
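For readers who want to see the risk tiering made concrete, here is a minimal, purely illustrative Python sketch. The tier names follow the draft Act, but the obligations attached to each tier are simplified assumptions of mine, not the legal text:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # banned outright
        HIGH = "high"                  # strict pre-deployment obligations
        LIMITED = "limited"            # transparency duties
        MINIMAL = "minimal"            # no specific obligations

    # Simplified, assumed mapping of tiers to example obligations.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited from the market"],
        RiskTier.HIGH: ["risk management", "data governance", "technical documentation"],
        RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
        RiskTier.MINIMAL: [],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the illustrative obligations attached to a given risk tier."""
        return OBLIGATIONS[tier]

    # A hypothetical large-platform recommender system would sit in the high-risk tier.
    print(obligations_for(RiskTier.HIGH))

The point of the schematic is simply that, under a risk-based regime, it is a system's tier, not its underlying technology, that determines the duties its developer must meet.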

India has made efforts to standardise responsible AI development and is now about to consider regulating AI per se. According to the Indian government, AI has proven to be an enabler of the digital and innovation ecosystem, but its application has given rise to ethical concerns and associated risks around issues such as bias and discrimination in decision-making, privacy violations, lack of transparency in AI systems, and questions about responsibility for harm caused by them.

There is also a more prosaic but very real concern in India that AI and automation could put people out of work in fields such as manufacturing, transportation and customer service, with knock-on effects for the wider economy.

It is interesting to observe that India also has one of the highest numbers of ChatGPT users in the world. OpenAI's chatbot has raised concerns about misinformation and about privacy violations arising from the collection of personal data without consent.

As India makes extensive use of the technology in governance and in delivering services to its citizens, one can confidently predict that it will also prepare its own blueprint for artificial intelligence (AI) development. As noted earlier, in doing so it will be essential to ensure that regulation is not so stringent that it hampers innovation or slows down the implementation of the technology.

As far as governance is concerned, there are already a number of extant AI strategy documents for India: "Responsible AI" of February 2020 and "Operationalizing Principles for Responsible AI" of August 2021. There is also a National Strategy for Artificial Intelligence, which identifies priority sectors such as healthcare, agriculture, education, smart cities and smart mobility.

There is a strong likelihood that the AI framework in Europe will inspire India. It is notable that in the recent Digital India Act (DIA) consultation process, India signalled its intention to regulate high-risk AI systems. The EU's AI Act emphasises the need for openness and accountability in the development and deployment of AI systems, based on a human rights approach, and there is every sign that India will follow this ethical model.

As for the Indian Digital Personal Data Protection Bill 2022 (DPDPB 2022), it applies to AI developers who collect and use massive amounts of data to train their algorithms and enhance AI solutions. This implies that AI developers should comply with the key principles of privacy and data protection enshrined in the DPDPB 2022, such as purpose limitation, data minimisation, consensual processing and contextual integrity.
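By way of illustration only, the following Python sketch shows what two of these principles, purpose limitation and consensual processing, might look like when filtering training data. The record structure and field names are my own assumptions, not anything prescribed by the Bill:

    from dataclasses import dataclass

    @dataclass
    class DataRecord:
        subject_id: str
        consented_purposes: set[str]  # purposes the data subject agreed to

    def filter_training_data(records: list[DataRecord], purpose: str) -> list[DataRecord]:
        """Keep only records whose subjects consented to the stated purpose
        (purpose limitation and consensual processing, in simplified form)."""
        return [r for r in records if purpose in r.consented_purposes]

    # Example: only records consented for "model_training" may be used to train a model.
    records = [
        DataRecord("user-1", {"model_training", "analytics"}),
        DataRecord("user-2", {"analytics"}),
    ]
    usable = filter_training_data(records, "model_training")
    print(len(usable))  # 1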

Having surveyed the regulatory climate at the national level in key jurisdictions, it is instructive to study the emergence of the concept of AI Digital Partnerships, and in particular, two of the growing number of international partnerships that have recently been established between the EU and, respectively, the US and India.

Through its recently formed digital partnerships, the EU is seeking to strengthen connectivity across the world by collaborating with like-minded countries to tackle the digital divide at the international level, based on the four pillars of the EU's so-called Digital Compass strategy: skills, infrastructure, and the transformation of business and of public services. The underlying objective is to foster a fair, inclusive and equal digital environment for all. The EU currently has partnerships with India, the US, Japan, South Korea and Singapore. The digital partnerships are focused on safety and security in the following areas: secure 5G/6G; safe and ethical applications of artificial intelligence; and the resilience of global supply chains in the semiconductor industry.

Taking first the EU-US partnership, both sides reaffirm their commitment to a risk-based approach to AI to advance trustworthy and responsible AI technologies. The partnership puts emphasis on the risks and opportunities of generative AI, including the preparation of a "Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management." Three dedicated expert groups are now focused on AI terminology and taxonomy, cooperation on AI standards and tools for trustworthy AI and risk management, and identifying existing and emerging AI risks.

The groups have issued a list of 65 key AI terms essential to understanding risk-based approaches to AI, along with their EU and US interpretations and shared EU-US definitions, and have mapped the respective involvement of the EU and the US in standardisation activities with the goal of identifying relevant AI-related standards of mutual interest. They have agreed to carry these approaches into multilateral discussions such as the G7 and the Organisation for Economic Co-operation and Development (OECD).

The newly formed EU-India Trade and Technology Council (TTC) aims to tackle the strategic challenges of trusted technology. The TTC is cast as a permanent structure for dialogue on digital and trade policies, following the format of the TTC between the EU and the United States, in areas such as digital connectivity, green technology and trade. It met for the first time in Brussels in May 2023, paving the way for cooperation in several strategic areas, including AI governance.

The TTC will offer political guidance as well as the necessary structure to effectively implement political decisions, coordinate technical endeavours, and ensure accountability at the political level. It will help increase EU-India bilateral trade, which is at a historical high, with €120 billion worth of goods traded in 2022; digital products and services accounted for a further €17 billion of that trade.

The TTC is divided into several working groups, the first of which is the most relevant for our purposes: the Working Group on Strategic Technologies, Digital Governance, and Digital Connectivity, which will work on areas such as digital connectivity, AI, 5G/6G, cloud systems, quantum computing, semiconductors, digital training, and big tech platforms. The aim is to find convergence on several aspects of digital policy, with the widest underlying disagreement being over the approach to cross-border data flow regulation and questions surrounding data localisation.

The meeting's conclusions also point to coordinating policies on AI and semiconductors and to working together to bridge the digital skills gap. Cooperation on global digital policy between the two largest democracies on the planet is bound to facilitate access to the rapidly expanding Asian market.

Let us now try to draw some brief conclusions from the foregoing.

First, the ideological differences between countries on whether and how to regulate AI could have broader geopolitical consequences for managing AI and information technology in the years to come. Control over strategic resources such as data, software and hardware has become important for all countries. This is demonstrated by discussions over international data transfers, resources linked to cloud computing, the use of open-source software, and so on.

These developments seem, at least for now, to increase fragmentation, mistrust, and geopolitical competition, and as such pose enormous challenges to the goal of establishing an agreed approach to artificial intelligence based on respect for human rights.

To some extent, however, values are evolving into an ideological mechanism that aims to ensure a human rights-centred approach to the role and use of AI. Put differently, an alliance is currently forming around a human rights-oriented view of socio-technical governance, embraced and encouraged by like-minded democratic nations. This, to me, is the direction India, the world's most populous democracy, should take: engaging in greater coordination on the development of evaluation and measurement tools that contribute to credible AI regulation, risk management, and privacy-enhancing technologies.

Secondly, we need to avoid the fragmentation of technological ecosystems. Securing AI alignment at the international level is likely to be the major challenge of our century. Like the EU AI Act, the proposed US Algorithmic Accountability Act of 2022 would require organisations to perform impact assessments of their AI systems before and after deployment, including providing more detailed descriptions of data, algorithmic behaviour, and forms of oversight.

Thirdly, AI will undoubtedly continue to revolutionise society in the coming decades. However, it remains uncertain whether the world's countries can agree on how the technology should be deployed for the greatest possible societal benefit.

Fourth and finally, no matter how AI governance is finally designed, it must be understandable to the average citizen, to businesses, and to practising policymakers and regulators confronted with a plethora of initiatives at all levels. AI regulations and standards need to be in line with our reality. Taking AI to the next level means increasing the digital prowess of global citizens, fixing the rules for the market power of tech giants, and understanding that transparency is part of the responsible governance of AI. And at the global level, it will be crucial to cooperate strategically and continuously with partners within the framework of the International Digital Partnership on AI.
