What can the current EU AI approach do to overcome the challenges …

In the 1970s, as researchers started to grasp the intricacies of genetics, they were inevitably confronted with the ethical implications of intentionally altering genes in living organisms. While the technology for making such modifications was still in its infancy, it was clear that its widespread use was just around the corner. In 1975, a landmark conference was held in Asilomar, California, bringing together about 140 participants, not just scientists but also legal experts, writers, and journalists. The goal was to address the potential risks associated with gene manipulation. The conference produced a set of guiding principles that continue to have a lasting impact today. Asilomar remains a singular example of effective self-regulation, proactive risk mitigation, and open communication with the public.

Today, as we stand on the cusp of a new, AI-driven era, there is again a palpable sense of anticipation across the globe, as new risks and opportunities spread out before us. AI has swiftly transitioned from technological novelty to pervasive force, reshaping our daily lives and industries, from pioneering work such as OpenAI's generative systems to autonomous transport. The allure of generative AI applications has dwarfed past technological frenzies. While innovations like the printing press, the steam engine, and the internet ushered in transformative epochs of their own, AI holds the promise of instigating the most monumental shift in human history.

However, as this wave of innovation surges forward, the need for a comprehensive regulatory framework becomes increasingly urgent. A goal shared by many stakeholders should be to ensure the ethical, secure, and equitable use of AI for all. This is not a hypothetical debate about the distant future; it is about what must be done today to secure a prosperous future for humanity and the planet.

Numerous stakeholders, from governments and international organisations to NGOs and tech giants, are scrambling to address the myriad challenges posed by AI. Whether driven by genuine concern or merely by the wish to cultivate a contemporary image, a range of initiatives is underway. The European Commission is pioneering efforts to craft the first-ever legal framework for AI[1]. The proposed legislation sets different rules for different risk levels and has the potential to address AI risks for society. Yet it is uncertain whether this European effort can meet all current and, especially, future challenges. Two glaring gaps persist in the European legislative effort, as well as in numerous parallel national and international initiatives.

First, the vast majority of current efforts focus on the present and on the impacts of narrow AI, that is, today's AI tools capable of performing specific tasks (such as ChatGPT, AlphaFold, or AlphaGo). This preoccupation with narrow AI obscures the monumental, potentially catastrophic challenges presented by Artificial General Intelligence (AGI). AGI refers to a form of AI with the capacity to comprehend, learn, and apply knowledge across a wide range of tasks and domains[2]. An AGI system connected to the internet and to myriad sensors and smart devices could solve complex problems, seek information by any means (even by interacting directly with humans), make logical deductions, and even rewrite its own code. AGI does not exist today, yet expert estimates[3] suggest it could arrive between 2035 and 2040, a timeline that coincides with the time typically needed to negotiate and solidify a global AGI treaty and governance system. This synchronicity underscores the pressing need to pivot our focus, using foresight methodologies to discern and tackle imminent challenges and to prepare for unknown ones.

The second challenge for the ongoing legislative efforts is fragmentation. AI systems, much like living organisms, transcend political borders. Attempting to regulate AI through national or regional efforts alone carries a strong risk of failure, given how easily AI capabilities proliferate. Major corporations and emerging AI startups outside the EU's jurisdiction will continue to create new technologies, making it nearly impossible to prevent European residents from accessing these advancements. In this light, several stakeholders[4] argue that any policy and regulatory framework for AI must be established on a global scale. Additionally, Europe's pursuit of continent-wide regulation may undermine its competitiveness in the global AI arena if the sector enjoys a more relaxed regulatory framework in other parts of the world. Furthermore, Article 6 of the proposed EU Artificial Intelligence Act introduces provisions for high-risk AI systems, requiring developers and deployers themselves to ensure safety and transparency. However, the provision's reliance on self-assessment raises concerns about its effectiveness.

What must be done

In this rapidly changing and complex global landscape, is there any political space for the EU to take action? The pan-European study OurFutures[5] reveals that the vast majority of participants express deep concern about the future, with technology-related issues ranking high on their list, alongside social justice, nature, well-being, education, and community. Moreover, despite emerging signs of mistrust towards governments globally, citizens in the EU maintain confidence in government leaders as catalysts for positive change (picture 1), and they prioritize the human condition and the environment over economic prosperity.

Picture 1: Who are the changemakers and what matters more (OurFutures)

The clock is ticking, but governments still have the opportunity to address societal concerns by taking bold steps. In the case of AI, the EU should assume a leadership role in global initiatives and embrace longtermism as a fundamental value, ensuring a sustainable future for current and future generations:

EU as a global sounding board. While the European Commission's legislative initiative on AI is a leap in the right direction, structured, productive collaboration with key international partners such as the USA, China, UNESCO, and the OECD is essential, with the aim of setting up a global AI regulatory framework. The success of the Asilomar conference was rooted in its ability to create a voluntary set of globally respected rules. Violators faced condemnation from the global community, exemplified by the case of He Jiankui[6], who created the world's first genetically edited babies and was subsequently sentenced to prison. Drawing on its tradition of negotiating regulations with many diverse stakeholders, the EU should champion a global initiative under the UN to forge a consensus on AI regulation, while adapting to the diversity of approaches shown by other AI actors.

A technology monitoring system. A global technology observatory has already been suggested by the Millennium Project[7], the Carnegie Council for Ethics in International Affairs[8], and other experts. Such an organization should be empowered to supervise AI research, evaluate high-risk AI systems, and grant ISO-like certifications to AI systems that comply with agreed standards. It should track technological progress and employ foresight methods to anticipate future challenges, particularly as AGI looms on the horizon. Such an entity, perhaps aptly named the International Science and Technology Organization (ISTO) and building on the work done by ISO/IEC and the IEEE on ad hoc standards, could eventually extend its purview beyond AI to fields like synthetic biology and cognitive science. The usual obstacles, such as dissent over nuances, apprehensions about national sovereignty, and the intricate dance of geopolitics, could be avoided by letting such a body emerge from the existing standardization organizations mentioned above. The EU, with its rich legacy, is perfectly poised to champion this cause in close collaboration with the UN and to expedite its realization.

Embrace longtermism. Longtermism, the ethical view that prioritizes positively influencing the long-term future, is a moral imperative in an era of exponential technological advancement and complex challenges such as the climate crisis. Embracing longtermism means designing policies that address the risks of the transition from sub-human AI to greater-than-human AI. For the European Commission, initiatives to address AI challenges should not be viewed as mere regulation but as a unique opportunity to etch its commitment to a secure, ethical AI future into history. A longtermist perspective on AI dovetails with the idea of AI alignment put forward by numerous scholars[9], which addresses a range of AI safety concerns and aims to ensure that AI remains aligned with our objectives rather than drifting towards unintended consequences.

As the world races against the clock to regulate AI, the EU has the potential to be a trailblazer. The EU's initiatives to address AI challenges should not be seen merely as a regulatory endeavor; they are an unparalleled opportunity. Embracing longtermism and spearheading the establishment of an ISTO could be the EU's crowning achievement. It is time for the EU to step up, engage in proactive diplomacy, and pave the way for a sustainable AI future that respects the values and concerns of people today and tomorrow.

[1] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[2] https://www.gartner.com/en/information-technology/glossary/artificial-general-intelligence-agi

[3] Macey-Dare, Rupert, How Soon is Now? Predicting the Expected Arrival Date of AGI - Artificial General Intelligence (June 30, 2023). Available at SSRN: https://ssrn.com/abstract=4496418

[4] For example: https://www.forbes.com/sites/hecparis/2022/09/09/regulating-artificial-intelligenceis-global-consensus-possible/?sh=a505f237035c

[5] https://knowledge4policy.ec.europa.eu/projects-activities/ourfutures-images-future-europe_en

[6] https://www.bbc.com/news/world-asia-china-50944461

[7] https://www.millennium-project.org/projects/workshops-on-future-of-worktechnology-2050-scenarios/

[8] Global AI Observatory (GAIO): https://www.carnegiecouncil.org/media/article/a-framework-for-the-international-governance-of-ai

[9] For example: http://lcfi.ac.uk/projects/completed-projects/value-alignment-problem/
