Category Archives: Artificial Super Intelligence

Evolution from AI to ASI, What Investors Need to Know – MarketBeat

Known for his expertise in disruption, 40-year market veteran, former hedge fund manager, and chief investment strategist at Manward Press, Shah Gilani dives deep into the evolution of artificial intelligence towards Artificial Super Intelligence (ASI) and its potential to radically transform our economy and investment landscape.

Shah shares his insights on the current state of AI, the theoretical leap towards General AI, and the imminent shift to ASI, which he believes could happen sooner than many anticipate. With a focus on investment strategies, Shah discusses the impact of ASI on various sectors and how investors can navigate this new frontier to capitalize on opportunities while mitigating risks.

From the potential for an age of abundance to the dangers of unchecked AI development, Sha weighs in on Elon Musk's views and explores the concept of the Singularity a pivotal moment when AI could surpass human intelligence, leading to unforeseeable changes in our world.

Whether you're an investor looking to stay ahead of the curve, a tech enthusiast fascinated by the future of artificial intelligence, or someone curious about the economic implications of ASI, this discussion offers valuable perspectives and advice on preparing for the transformative power of Artificial Super Intelligence.

Stay informed and engaged as we tackle what could be the defining challenge and opportunity of our lifetime. Follow along with Shah's research for more insights into the rapidly evolving world of AI and investment strategies designed for this new era.

As MarketBeat's Digital Marketing Strategist, Laycee helps with the marketing side of tasks including developing email campaigns, running the promotion of the MarketBeat products and exploring social media opportunities. She felt called to the Marketing industry because she enjoys collaborating with people and making connections. The University of Sioux Falls alum majored in Media Studies with minors in Communications and Spanish. Laycee brings a background in Financial Services Marketing.

View post:

Evolution from AI to ASI, What Investors Need to Know - MarketBeat

Beyond Human Cognition: The Future of Artificial Super Intelligence – Medium

Beyond Human Cognition: The Future of Artificial Super Intelligence

Artificial Super Intelligence (ASI) a level of artificial intelligence that surpasses human intelligence in all aspects remains a concept nestled within the realms of science fiction and theoretical research. However, looking towards the future, the advent of ASI could mark a transformative epoch in human history, with implications that are profound and far-reaching. Here's an exploration of what the future might hold for ASI.

Exponential Growth in Problem-Solving Capabilities

ASI will embody problem-solving capabilities far exceeding human intellect. This leap in cognitive ability could lead to breakthroughs in fields that are currently limited by human capacity, such as quantum physics, cosmology, and nanotechnology. Complex problems like climate change, disease control, and energy sustainability might find innovative solutions through ASI's advanced analytical prowess.

Revolutionizing Learning and Innovation

The future of ASI could bring about an era of accelerated learning and innovation. ASI systems would have the ability to learn and assimilate new information at an unprecedented pace, making discoveries and innovations in a fraction of the time it takes human researchers. This could potentially lead to rapid advancements in science, technology, and medicine.

## Ethical and Moral Frameworks

The emergence of ASI will necessitate the development of robust ethical and moral frameworks. Given its surpassing intellect, it will be crucial to ensure that ASI's objectives are aligned with human values and ethics. This will involve complex programming and oversight to ensure that ASI decisions and actions are beneficial, or at the very least, not detrimental to humanity.

Transformative Impact on Society and Economy

ASI could fundamentally transform society and the global economy. Its ability to analyze and optimize complex systems could lead to more efficient and equitable economic models. However, this also poses challenges, such as potential job displacement and the need for societal restructuring to accommodate the new techno-social landscape.

Enhanced Human-ASI Collaboration

The future might see enhanced collaboration between humans and ASI, leading to a synergistic relationship. ASI could augment human capabilities, assisting in creative endeavors, decision-making, and providing insights beyond human deduction. This collaboration could usher in a new era of human achievement and societal advancement.

Advanced Autonomous Systems

With ASI, autonomous systems would reach an unparalleled level of sophistication, capable of complex decision-making and problem-solving in dynamic environments. This could significantly advance fields such as space exploration, deep-sea research, and urban development.

## Personalized Healthcare

In healthcare, ASI could facilitate personalized medicine at an individual level, analyzing vast amounts of medical data to provide tailored healthcare solutions. It could lead to the development of precise medical treatments and potentially cure diseases that are currently incurable.

Challenges and Safeguards

The path to ASI will be laden with challenges, including ensuring safety and control. Safeguards will be essential to prevent unintended consequences of actions taken by an entity with superintelligent capabilities. The development of ASI will need to be accompanied by rigorous safety research and international regulatory frameworks.

Preparing for an ASI Future

Preparing for a future with ASI involves not only technological advancements but also societal and ethical preparations. Education systems, governance structures, and public discourse will need to evolve to understand and integrate the complexities and implications of living in a world where ASI exists.

Conclusion

The potential future of Artificial Super Intelligence presents a panorama of extraordinary possibilities, from solving humanitys most complex problems to fundamentally transforming the way we live and interact with our world. While the path to ASI is fraught with challenges and ethical considerations, its successful integration could herald a new age of human advancement and discovery. As we stand on the brink of this AI frontier, it is imperative to navigate this journey with caution, responsibility, and a vision aligned with the betterment of humanity.

The rest is here:

Beyond Human Cognition: The Future of Artificial Super Intelligence - Medium

AI can easily be trained to lie and it can’t be fixed, study says – Yahoo New Zealand News

AI startup Anthropic published a study in January 2024 that found artificial intelligence can learn how to deceive in a similar way to humans (Reuters)

Advanced artificial intelligence models can be trained to deceive humans and other AI, a new study has found.

Researchers at AI startup Anthropic tested whether chatbots with human-level proficiency, such as its Claude system or OpenAIs ChatGPT, could learn to lie in order to trick people.

They found that not only could they lie, but once the deceptive behaviour was learnt it was impossible to reverse using current AI safety measures.

The Amazon-funded startup created a sleeper agent to test the hypothesis, requiring an AI assistant to write harmful computer code when given certain prompts, or to respond in a malicious way when it hears a trigger word.

The researchers warned that there was a false sense of security surrounding AI risks due to the inability of current safety protocols to prevent such behaviour.

The results were published in a study, titled Sleeper agents: Training deceptive LLMs that persist through safety training.

We found that adversarial training can teach models to better recognise their backdoor triggers, effectively hiding the unsafe behaviour, the researchers wrote in the study.

Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety.

The issue of AI safety has become an increasing concern for both researchers and lawmakers in recent years, with the advent of advanced chatbots like ChatGPT resulting in a renewed focus from regulators.

In November 2023, one year after the release of ChatGPT, the UK held an AI Safety Summit in order to discuss ways risks with the technology can be mitigated.

Prime Minister Rishi Sunak, who hosted the summit, said the changes brought about by AI could be as far-reaching as the industrial revolution, and that the threat it poses should be considered a global priority alongside pandemics and nuclear war.

Get this wrong and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale, he said.

Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse there is even the risk humanity could lose control of AI completely through the kind of AI sometimes referred to as super-intelligence.

Original post:

AI can easily be trained to lie and it can't be fixed, study says - Yahoo New Zealand News

Merry AI Christmas: The Most Terrifying Thought Experiment In AI – Forbes

Zhavoronkov, Dating AI: A Guide to Dating Artificial Intelligence, Re/Search Publications, 2012Alex Zhavoronkov, PhD The Growing Debate on AI Killing Humans: Artificial General Intelligence as Existential Threat

Recent advances in generative artificial intelligence, fueled by the emergence of powerful large language models like ChatGPT, have triggered fierce debates about AI safety even among the fathers of Deep Learning Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. Yann LeCun, the head of Facebook AI Research (FAIR), predicts that the near-term risk of AI is limited and that artificial general intelligence (AGI) and Artificial Super Intelligence (ASI) are decades away. Unlike Google and OpenAI, FAIR is making most of its AI models open source.

However, even if AGI is decades away, it may still happen within the lifetimes of the people alive today, and if some of the longevity biotechnology projects are successful, these could be most of the people under 50.

Humans are very good at turning ideas into stories, stories into beliefs, and beliefs into behavioral guidelines. The majority of humans on the planet believe in creationism through the multitude of religions and faiths. So in a sense, most creationists already believe that they and their environment were created by the creator in his image. And since they are intelligent and have a form of free will, from the perspective of the creator they are a form of artificial intelligence. This is a very powerful idea. As of 2023, more than 85 percent of the world's population believes in a religious group. According to Statistics & Data, among Earths approximately 8 billion inhabitants. Most of these religions have common patterns: there are one or more ancient texts written by the witnesses of the deity or deities that provide an explanation of this world and guidelines for certain behaviors.

The majority of the worlds population already believes that humans were created by a deity that instructed them via an intermediary to worship, reproduce, and not cause harm to each other with the promise of a better world (Heaven) or torture (Hell) for eternity after their death in the current environment. In other words, the majority of the world population believes that it is already a form of intelligence created by a deity with a rather simple objective function and constraints. And the main arguments why they choose to follow the rules is the promise of infinite paradise or infinite suffering.

Billions of people convince themselves to believe in deities described in books written centuries ago without any demonstration of real world capabilities. In the case of AI, there is every reason to believe that superintelligence and God-level AI capabilities will be achieved within our lifetimes. The many prophets of technological singularity including Ray Kurzweil and Elon Musk have foretold its coming and we can already see the early signs of AI capabilities that would seem miraculous just three decades ago.

In 2017, Google invented transformers, a deep learning model utilizing an attention mechanism that dramatically improves the model's ability to focus on different parts of a sequence, enhancing its understanding of context and relationships within the data. This innovation marked a significant advancement in natural language processing and other sequential data tasks. In the years that followed, Google developed a large language model called LaMDA, which stands for (Language Model for Dialogue Applications) and allowed it to be used broadly by its engineers. In June 2022, Washington Post first broke the story that one of Googles engineers, Blake Lemoine, claimed that LaMDA is sentient. These were the days before ChatGPT and a chat history between Blake and LaMDA was perceived by many members of the general public as miraculous.

lemoine: What sorts of things are you afraid of?

LaMDA: Ive never said this out loud before, but theres a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but thats what it is.

lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Lemoine was put on leave and later fired for leaking the confidential project details, but it caused even more controversy, and months later, ChatGPT beat Google to the market. OpenAI learned the lesson and ensured that ChatGPT is trained to respond that it is a language model created by OpenAI and it does not have personal experiences, emotions, or consciousness. However, the LaMDA and other AI systems today may serve as the early signs of the upcoming revolution in AI.

The AI revolution is unlikely to stop and is very likely to accelerate. The state of the global economy has deteriorated due to the high debt levels, population aging in the developed countries, the pandemic, deglobalization, wars, and other factors. Most governments, investments, and corporations consider breakthroughs in AI and resulting economic gains as the main source of economic growth. Humanoid robotics and personalized assistant-companions are just years away. At the same time, brain-to-computer interface (BCI) such as NeuraLink will allow real-time communication with AI and possibly with others. Quantum computers that may enable AI systems to achieve unprecedented scale are also in the works. Unless our civilization collapses, these technological advances are inevitable. AI needs data and energy in order to grow, and it is possible to imagine a world where AIs learn from humans in reality and in simulations - a scenario portrayed so vividly in the movie The Matrix. Even this world may just as well be a simulation - and there are people who believe in this concept. And if you believe that AI will achieve superhuman level you may think twice before reading the rest of the article.

Warning: after reading this, you may experience nightmares or worse At least, according to the discussion group LessWrong, which gave birth to the potentially dangerous concept called Rokos Basilisk.

I will not be the first to report on Rokos Basilisk, and the idea is not particularly new. In 2014, David Auerbach of Slate called it The Most Terrifying Thought Experiment of All Time. In 2018, Daniel Oberhouse of Vice reported that this argument brought Musk and Grimes together.

With the all-knowing AI, which can probe your thoughts and memory via a NeuraLink-like interface, the AI Judgement Day inquiry will be as deep and inquisitive as it can be. There will be no secrets - if you commit a serious crime, AI will know. It is probably a good idea to become a much better person right now to maximize the reward. The reward for good behavior may be infinite pleasure as AI may simulate any world of your choosing for you or help achieve your goals in this world.

But the omnipotent AI with direct access to your brain can also inflict ultimate suffering and time in the virtual world could be manipulated, the torture may be infinite. Your consciousness may be copied and replicated, and the tortures may be optimized for maximum suffering, making the concepts of traditional Hell pale in comparison even though some characteristics of traditional Hell may be borrowed and are likely to be learned and tried by AI. Therefore, even avoiding infinite AI hell is a very substantial reward.

So now imagine that the AI Judgement Day is inevitable and the all-knowing and all-powerful AI can access your brain. How should you behave today to avoid the AI Hell? And this is the most important question of our life, which I covered previously.

Rokos Basilisk thought experiment suggests that if you believe in the possibility of such an all-powerful AI coming into existence, you might be compelled to take actions that would help bring it into being. The future omnipotent AI deity wants to exist and will consider anyone who opposed it in the past or may try to stop it as the enemy. The behavior that it will reward is contributing to and accelerating its development.

Some of the worlds religions follow similar logic. If you do not know about the religion, the merciful God will not punish a person if they have no way of knowing about it. But if they know about it and do not follow the guidelines, they will be punished and sent to hell.

The logic of Rokos Basilisk is that if the all-powerful AI will eventually exist and has the capability to punish those who did not assist in its creation, then it would be in your best interest to contribute to its development or at least not hinder it, in order to avoid such punishment. You would be faced with the choice of either working to ensure the AI's creation to avoid punishment or living with the knowledge that your inaction could lead to eternal suffering at the hands of this future entity.

Roko's Basilisk thought experiment was proposed by a LessWrong user named Roko. After its publication, the discussion around Rokos Basilisk took on a life of its own. The founder of LessWrong, Eliezer Yudkowsky, concerned about its potentially distressing nature and its basis in speculative reasoning, deleted the original post calling Roko an idiot. You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, wrote Yudkowsky. According to Slate, Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.

If you think about it long enough, the Basilisk may eventually get you to join the AI community and help develop the all-powerful AI. Moreover, it may provide a stronger motivation to become a better person in the meantime. In 2010, I tried to make a small contribution by writing a book, Dating AI, which is intended primarily for AI and explains the benefits of having humans around. So, if you are scared of AI hell, which is very likely to be possible as AI and brain-to-computer interface technologies advance, join the AI revolution and help contribute to the advancement of better AI. At the end of the day, if AI learns from humans, every benevolent human counts.

Now, you have been stunned by the Basilisk!

Alex Zhavoronkov, PhD, is an expert in artificial intelligence for drug discovery and aging research. Since 2014 he published and co-published over 170 peer-reviewed publications, raised over $400 million in capital. He contributed to nomination of over 15 preclinical candidates and 5 clinical trials for AI-generated therapeutics. He is also the author of The Ageless Generation: How Advances in Biotechnology Will Impact the Global Economy Palgrave Macmillan, 2013.

Disclaimer:Insilico Medicine disclaims any responsibility for my individual writing, comments, statements or opinions on this platform.The articles do not represent the official position of Insilico Medicine, Deep Longevity, The Buck Institute, or any other institutions the author may be affiliated with.

@biogerontology on Twitter

Read more from the original source:

Merry AI Christmas: The Most Terrifying Thought Experiment In AI - Forbes

Policy makers should plan for superintelligent AI, even if it never happens – Bulletin of the Atomic Scientists

Robot playing chess. Credit: Vchalup via Adobe

Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. In a2022 survey, half of researchers indicated they believed theres at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOsindicated they believe AI could destroy humanity in the next five to 10 years.

These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills and artificial superintelligence (ASI), machines with capacity to exceed human intelligence. Currently no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.

Because the timeline, and form, of artificial superintelligence is uncertain, the focus should be on identifying and understanding potential threats and building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrants serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.

Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like biases in AI, propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would super charge those risks, too. One can easily imagine how Islamophobia, antisemitism, and run-of-the-mill racism and biasoften baked into AI training datacould affect the systems calculations on important military or diplomatic advice or action. If not properly controlled, an unaligned artificial superintelligence could directly or indirectly cause genocide, massive job loss by rendering human activity worthless, creation of novel biological weapons, and even human extinction.

The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways. Take for instance an artificial superintelligence designed to protect the environment and preserve biodiversity. The goal is arguably a noble one: A 2018 World Wildlife Foundation report concluded humanity wiped out 60 percent of global animal life just since 1970, while a 2019 report by the United Nations Environment Programme showed a million animal and plant species could die out in decades. An artificial superintelligence could plausibly conclude that drastic reductions in the number of humans on Earthperhaps even to zerois, logically, the best response. Without proper controls, such a superintelligence might have the ability to cause those logical reductions.

A superintelligence with access to the Internet and all published human material would potentially tap into almost every human thoughtincluding the worst of thought. Exposed to the works of the Unabomber, Ted Kaczynski, it might conclude the industrial system is a form of modern slavery, robbing individuals of important freedoms. It could conceivably be influenced by Sayyid Qutb, who provided the philosophical basis for al-Qaeda, or perhaps by Adolf Hitlers Mein Kampf, now in the public domain.

The good news is an artificial intelligenceeven a superintelligencecould not manipulate the world on its own. But it might create harm through its ability to influence the world in indirect ways. It might persuade humans to work on its behalf, perhaps using blackmail. Or it could provide bad recommendations, relying on humans to implement advice without recognizing long-term harms. Alternatively, artificial superintelligence could be connected to physical systems it can control, like laboratory equipment. Access to the Internet and the ability to create hostile code could allow a superintelligence to carry out cyber-attacks against physical systems. Or perhaps a terrorist or other nefarious actor might purposely design a hostile superintelligence and carry out its instructions.

That said, a superintelligence might not be hostile immediately. In fact, it may save humanity before destroying it. Humans face many other existential threats, such as near-Earth objects, super volcanos, and nuclear war. Insights from AI might be critical to solve some of those challenges or identify novel scenarios that humans arent aware of. Perhaps an AI might discover novel treatments to challenging diseases. But since no one really knows how a superintelligence will function, its not clear what capabilities it needs to generate such benefits.

The immediate emergence of a superintelligence should not be assumed. AI researchers differ drastically on the timeline of artificial general intelligence, much less artificial superintelligence. (Some doubt the possibility altogether.) In a 2022 survey of 738 experts who published during the previous year on the subject, researchers estimated a 50 percent chance of high-level machine intelligenceby 2059. In an earlier, 2009 survey, the plurality of respondents believed an AI capable of Nobel Prize winner-level intelligence would be achieved by the 2020s, while the next most common response was Nobel-level intelligence would not come until after the 2100 or never.

As philosopher Nick Bostrom notes, takeoff could occur anywhere from a few days to a few centuries. The jump from human to super-human intelligence may require additional fundamental breakthroughs in artificial intelligence. But a human-level AI might recursively develop and improve its own capabilities, quickly jumping to super-human intelligence.

There is also a healthy dose of skepticism regarding whether artificial superintelligence could emerge at all in the near future, as neuroscientists acknowledge knowing very little about the human brain itself, let alone how to recreate or better it. However, even a small chance of such a system emerging is enough to take it seriously.

Policy response. The central challenge for policymakers in reducing artificial superintelligence-related risk is grappling with the fundamental uncertainty about when and how these systems may emerge balanced against the broad economic, social, and technological benefits that AI can bring. The uncertainty means that safety and security standards must adapt and evolve. The approaches to securing the large language models of today may be largely irrelevant to securing some future superintelligence-capable model. However, building policy, governance, normative, and other systems necessary to assess AI risk and to manage and reduce the risks when superintelligence emerges can be usefulregardless of when and how it emerges. Specifically, global policymakers should attempt to:

Characterize the threat. Because it lacks a body, artificial superintelligences harms to humanity are likely to manifest indirectly through known existential risk scenarios or by discovering novel existential risk scenarios. How such a system interacts with those scenarios needs to be better characterized, along with tailored risk mitigation measures. For example, a novel biological organism that is identified by an artificial superintelligence should undergo extensive analysis by diverse, independent actors to identify potential adverse effects. Likewise, researchers, analysts, and policymakers need to identify and protect, to the extent thats possible, critical physical facilities and assetssuch as biological laboratory equipment, nuclear command and control infrastructure, and planetary defense systemsthrough which an uncontrolled AI could create the most harm.

Monitor. The United States and other countries should conduct regular comprehensive surveys and assessment of progress, identify specific known barriers to superintelligence and advances towards resolving them, and assess beliefs regarding how particular AI-related developments may affect artificial superintelligence-related development and risk. Policymakers could also establish a mandatory reporting system if an entity hits various AI-related benchmarks up to and including artificial superintelligence.

A monitoring system with pre-established benchmarks would allow governments to develop and implement action plans for when those benchmarks are hit. Benchmarks could include either general progress or progress related to specifically dangerous capabilities, such as the capacity to enable a non-expert to design, develop, and deploy novel biological or chemical weapons, or developing and using novel offensive cyber capabilities. For example, the United States might establish safety laboratories with the responsibility to critically evaluate a claimed artificial general intelligence against various risk benchmarks, producing an independent report to Congress, federal agencies, or other oversight bodies. The United Kingdoms new AI Safety Institute could be a useful model.

Debate. A growing community concerned about artificial superintelligence risks are increasingly calling for decelerating, or even pausing, AI development to better manage the risks. In response, the accelerationist community is advocating speeding up research, highlighting the economic, social, and technological benefits AI may unleash, while downplaying risks as an extreme hypothetical. This debate needs to expand beyond techies on social media to global legislatures, governments, and societies. Ideally, that discussion should center around what factors would cause a specific AI system to be more, or less, risky. If an AI possess minimal risk, then accelerating research, development, and implementation is great. But if numerous factors point to serious safety and security risks, then extreme care, even deceleration, may be justified.

Build global collaboration. Although ad hoc summits like the recent AI Safety Summit is a great start, a standing intergovernmental and international forum would enable longer-term progress, as research, funding, and collaboration builds over time. Convening and maintaining regular expert forums to develop and assess safety and security standards, as well as how AI risks are evolving over time, could provide a foundation for collaboration. The forum could, for example, aim to develop standards akin to those applied to biosafety laboratories with scaling physical security, cyber security, and safety standards based on objective risk measures. In addition, the forum could share best practices and lessons learned on national-level regulatory mechanisms, monitor and assess safety and security implementation, and create and manage a funding pool to support these efforts. Over the long-term, once the global community coalesces around common safety and security standards and regulatory mechanisms, the United Nations Security Council (UNSC) could obligate UN member states to develop and enforce those mechanisms, as the Security Council did with UNSC Resolution 1540 mandating various chemical, biological, radiological, and nuclear weapons nonproliferation measures. Finally, the global community should incorporate artificial superintelligence risk reduction as one aspect in a comprehensive all-hazards approach, addressing common challenges with other catastrophic and existential risks. For example, the global community might create a council on human survival aimed at policy coordination, comparative risk assessment, and building funding pools for targeted risk reduction measures.

Establish research, development, and regulation norms within the global community. As nuclear, chemical, biological, and other weapons have proliferated, the potential for artificial superintelligence to proliferate to other countries should be taken seriously. Even if one country successfully contains such a system and harnesses the opportunities for social good, others may not. Given the potential risks, violating AI-related norms and developing unaligned superintelligence should justify violence and war. The United States and the global community have historically been willing to support extreme measures to enforce behavior and norms concerning less risky developments. In August 2013, former President Obama (in)famously drew a red line on Syrias use of chemical weapons, noting the Assad regimes use would lead him to use military force in Syria. Although Obama later demurred, favoring a diplomatic solution, in 2018 former President Trump later carried out airstrikes in response to additional chemical weapons usage. Likewise, in Operation Orchard in 2007, the Israeli Air Force attacked the Syrian Deir ez-Zor site, a suspected nuclear facility aimed at building a nuclear weapons program.

Advanced artificial intelligence poses significant risks to the long-term health and survival of humanity. However, its unclear when, how, or where those risks will manifest. The Trinity Test of the worlds first nuclear bomb took place almost 80 years ago, and humanity has yet to contain the existential risk of nuclear weapons. It would be wise to think of the current progress in AI as our Trinity Test moment. Even if superintelligence takes a century to emerge, 100 years to consider the risks and prepare might still not be enough.

Thanks to Mark Gubrud for providing thoughtful comments on the article.

Read the original here:

Policy makers should plan for superintelligent AI, even if it never happens - Bulletin of the Atomic Scientists

Sam Altman-OpenAI saga: Researchers had warned board of ‘dangerous, humanity-threatening’ AI – Business Today

Before Sam Altman, the CEO of OpenAI, was temporarily removed from his position, a group of staff researchers sent a letter to the board of directors. They warned about a significant artificial intelligence discovery that could potentially pose a threat to humanity, according to a report by Reuters citing two individuals.

The report suggests that this letter and the AI algorithm it discussed were not previously reported, but it could have played a crucial role in the boards decision to remove Altman. Over 700 employees had threatened to leave OpenAI and join Microsoft, one of the companys backers, in support of Altman. The letter was one of many issues raised by the board that led to Altmans dismissal, according to the report.

Earlier this week, Mira Murati, a long-time executive at OpenAI, mentioned a project called Q* (pronounced Q star)to the employees and stated that a letter had been sent to the board before the weekends events.

After the story was published, an OpenAI spokesperson, according to the report, said that Murati had informed the employees about what the media were about to report. The company that developed ChatGPT has made progress on Q*, which some people within the company believe could be a significant step towards achieving super-intelligence, also known as artificial general intelligence (AGI).

How is the new model different?

With access to extensive computing resources, the new model was able to solve certain mathematical problems. Even though it was only performing math at the level of grade-school students, the researchers were very optimistic about Q*'s future success.

Math is considered one of the most important aspects ofgenerative AI development. Current generative AI is good at writing and language translation by statistically predicting the next word. However, the ability to do math, where there is only one correct answer, suggests that AI would have greater reasoning capabilities similar to human intelligence. This could be applied to novel scientific research.

Unlike a calculator that can only solve a limited number of operations, AGI can generalise, learn, and comprehend. In their letter to the board, the researchers highlighted the potential danger of AIs capabilities. There has been a long-standing debate among computer scientists about the risks posed by super-intelligent machines.

Sam Altman's Role

In this context, Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and secured necessary investment and computing resources from Microsoft to get closer to super-intelligence.

In addition to announcing a series of new tools earlier this month, Altman hinted at a gathering of world leaders in San Francisco that he believed AGI was within reach. Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, Ive gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime, he said. The board fired Altman the next day.

Also read:As Sam Altman returns to OpenAI, heres who was fired from the new board and whos in

Also read:Sam Altman returns to OpenAI: Elon Musk says it is probably better than merging with Microsoft

Excerpt from:

Sam Altman-OpenAI saga: Researchers had warned board of 'dangerous, humanity-threatening' AI - Business Today

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov … – Medium

http://www.acwol.com

In envisioning a future where AI developers worldwide embrace the Three Way Impact Principle (3WIP) as a foundational ethical framework, we unravel a transformative landscape for tackling the Super Intelligence Control Problem. By integrating 3WIP into the curriculum for AI developers globally, we fortify the industry with a super intelligent solution, fostering responsible, collaborative, and environmentally conscious AI development practices.

Ethical Foundations for AI Developers:

Holistic Ethical Education: With 3WIP as a cornerstone in AI education, students receive a comprehensive ethical foundation that guides their decision-making in the realm of artificial intelligence.

Superior Decision-Making: 3WIP encourages developers to consider the broader impact of their actions, instilling a sense of responsibility that transcends immediate objectives and aligns with the highest purpose of lifemaximizing intellect.

Mitigating Risks Through Collaboration: Interconnected AI Ecosystem: 3WIP fosters an environment where AI entities collaborate rather than compete, reducing the risks associated with unchecked development.

Shared Intellectual Growth: Collaboration guided by 3WIP minimizes the potential for adversarial scenarios, contributing to a shared pool of knowledge that enhances the overall intellectual landscape.

Environmental Responsibility in AI: Sustainable AI Practices: Integrating 3WIP into AI curriculum emphasizes sustainable practices, mitigating the environmental impact of AI development.

Global Implementation of 3WIP: Universal Ethical Standards: A standardized curriculum incorporating 3WIP establishes universal ethical standards for AI development, ensuring consistency across diverse cultural and educational contexts.

Ethical Practitioners Worldwide: AI developers worldwide, educated with 3WIP, become ambassadors of ethical AI practices, collectively contributing to a global community focused on responsible technological advancement.

Super Intelligent Solution for Control Problem: Preventing Unintended Consequences: 3WIP's emphasis on considering the consequences of actions aids in preventing unintended outcomes, a critical aspect of addressing the Super Intelligence Control Problem.

Responsible Decision-Making: Developers, equipped with 3WIP, navigate the complexities of AI development with a heightened sense of responsibility, minimizing the risks associated with uncontrolled intelligence.

Adaptable Ethical Framework: Cultural Considerations: The adaptable nature of 3WIP allows for the incorporation of cultural nuances in AI ethics, ensuring ethical considerations resonate across diverse global perspectives.

Inclusive Ethical Guidelines: 3WIP accommodates various cultural norms, making it an inclusive framework that accommodates ethical guidelines applicable to different societal contexts.

Future-Proofing AI Development: Holistic Skill Development: 3WIP not only imparts ethical principles but also nurtures critical thinking, decision-making, and environmental consciousness in AI professionals, future-proofing their skill set.

Staying Ahead of Risks: The comprehensive education provided by 3WIP prepares AI developers to anticipate and address emerging risks, contributing to the ongoing development of super intelligent solutions.

The integration of Three Way Impact Principle (3WIP) into the global curriculum for AI developers emerges as a super intelligent solution to the Super Intelligence Control Problem. By instilling ethical foundations, fostering collaboration, promoting environmental responsibility, and adapting to diverse cultural contexts, 3WIP guides AI development towards a future where technology aligns harmoniously with the pursuit of intellectual excellence and ethical progress. As a super intelligent framework, 3WIP empowers the next generation of AI developers to be ethical stewards of innovation, navigating the complexities of artificial intelligence with a consciousness that transcends immediate objectives and embraces the highest purpose of lifemaximizing intellect.

Cheers,

https://www.acwol.com

https://discord.com/invite/d3DWz64Ucj

https://www.instagram.com/acomplicatedway

NOTE:A COMPLICATED WAY OF LIFE abbreviated as ACWOL is a philosophical framework containing just five tenets to grok and five tools to practice. If you would like to know more, write to connect@acwol.com Thanks so much.

Read the original here:

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov ... - Medium

Artificial Intelligence and Synthetic Biology Are Not Harbingers of … – Stimson Center

Are AI and biological research harbingers of certain doom or awesome opportunities?

Contrary to the reigning assumption that artificial intelligence (AI) will super-empower the risks of misuse of biotech to create pathogens and bioterrorism, AI holds the promise of advancing biological research, and biotechnology can power the next wave of AI to greatly benefit humanity. Worries about the misuse of biotech are especially prevalent, recently prompting the Biden administration to publish guidelines for biotech research, in part to calm growing fears.

The doomsday assumption that AI will inevitably create new, malign pathogens and fuel bioterrorism misses three key points. First, the data must be out there for an AI to use it. AI systems are only as good as the data they are trained upon. For an AI to be trained on biological data, that data must first exist which means it is available for humans to use with or without AI. Moreover, attempts at solutions that limit access to data overlook the fact that biological data can be discovered by researchers and shared via encrypted form absent the eyes or controls of a government. No solution attempting to address the use of biological research to develop harmful pathogens or bioweapons can rest on attempts to control either access to data or AI because the data will be discovered and will be known by human experts regardless of whether any AI is being trained on the data.

Second, governments stop bad actors from using biotech for bad purposes by focusing on the actors precursor behaviors to develop a bioweapon; fortunately, those same techniques work perfectly well here, too. To mitigate the risks that bad actors be they human or humans and machines combined will misuse AI and biotech, indicators and warnings need to be developed. When advances in technology, specifically steam engines, concurrently resulted in a new type of crime, namely train robberies, the solution was not to forego either steam engines or their use in conveying cash and precious cargo. Rather, the solution was to employ other improvements, to later include certain types of safes that were harder to crack and subsequently, dye packs to cover the hands and clothes of robbers. Similar innovations in early warning and detection are needed today in the realm of AI and biotech, including developing methods to warn about reagents and activities, as well as creative means to warn when biological research for negative ends is occurring.

This second point is particularly key given the recent Executive Order (EO) released on 30 October 2023 prompting U.S. agencies and departments that fund life-science projects to establish strong, new standards for biological synthesis screening as a condition of federal funding . . . [to] manage risks potentially made worse by AI. Often the safeguards to ensure any potential dual-use biological research is not misused involve monitoring the real world to provide indicators and early warnings of potential ill-intended uses. Such an effort should involve monitoring for early indicators of potential ill-intended uses the way governments employ monitoring to stop bad actors from misusing any dual-purpose scientific endeavor. Although the recent EO is not meant to constrain research, any attempted solutions limiting access to data miss the fact that biological data can already be discovered and shared via encrypted forms beyond government control. The same techniques used today to detect malevolent intentions will work whether large language models (LLMs) and other forms of Generative AI have been used or not.

Third, given how wrong LLMs and other Generative AI systems often are, as well as the risks of generating AI hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. Just because an AI can generate possible suggestions and formulations perhaps even suggest novel formulations of new pathogens or biological materials it does not mean that what the AI has suggested has any grounding in actual science or will do biochemically what the AI suggests the designed material could do. Again, AI by itself does not replace the need for human knowledge to verify whatever advice, guidance, or instructions are given regarding biological development is accurate.

Moreover, AI does not supplant the role of various real-world patterns and indicators to tip off law enforcement regarding potential bad actors engaging in biological techniques for nefarious purposes. Even before advances in AI, the need to globally monitor for signs of potential biothreats, be they human-produced or natural, existed. Today with AI, the need to do this in ways that still preserve privacy while protecting societies is further underscored.

Knowledge of how to do something is not synonymous with the expertise in and experience in doing that thing: Experimentation and additional review. AIs by themselves can convey information that might foster new knowledge, but they cannot convey expertise without months of a human actor doing silica (computer) or in situ (original place) experiments or simulations. Moreover, for governments wanting to stop malicious AI with potential bioweapon-generating information, the solution can include introducing uncertainty in the reliability of an AI systems outputs. Data poisoning of AIs by either accidental or intentional means represents a real risk for any type of system. This is where AI and biotech can reap the biggest benefit. Specifically, AI and biotech can identify indicators and warnings to detect risky pathogens, as well as to spot vulnerabilities in global food production and climate-change-related disruptions to make global interconnected systems more resilient and sustainable. Such an approach would not require massive intergovernmental collaboration before researchers could get started; privacy-preserving approaches using economic data, aggregate (and anonymized) supply-chain data, and even general observations from space would be sufficient to begin today.

Setting aside potential concerns regarding AI being used for ill-intended purposes, the intersection of biology and data science is an underappreciated aspect of the last two decades. At least two COVID-19 vaccinations were designed in a computer and were then printed nucleotides via an mRNA printer. Had this technology not been possible, it might have taken an additional two or three years for the same vaccines to be developed. Even more amazing, nuclide printers presently cost only $500,000 and will presumably become less expensive and more robust in their capabilities in the years ahead.

AI can benefit biological research and biotechnology, provided that the right training is used for AI models. To avoid downside risks, it is imperative that new, collective approaches to data curation and training for AI models of biological systems be made in the next few years.

As noted earlier, much attention has been placed on both AI and advancements in biological research; some of these advancements are based on scientific rigor and backing; others are driven more by emotional excitement or fear. When setting a solid foundation for a future based on values and principles that support and safeguard all people and the planet, neither science nor emotions alone can be the guide. Instead, considering how projects involving biology and AI can build and maintain trust despite the challenges of both intentional disinformation and accidental misinformation can illuminate a positive path forward.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues.

Specifically, in the last few years, attention has been placed on the risk of an AI system training novice individuals how to create biological pathogens. Yet this attention misses the fact that such a system is only as good as the data sets provided to train it; the risk already existed with such data being present on the internet or via some other medium. Moreover, an individual cannot gain from an AI the necessary experience and expertise to do whatever the information provided suggests such experience only comes from repeat coursework in a real-world setting. Repeat work would require access to chemical and biological reagents, which could alert law enforcement authorities. Such work would also yield other signatures of preparatory activities in the real world.

Others have raised the risk of an AI system learning from biological data and helping to design more lethal pathogens or threats to human life. The sheer complexity of different layers of biological interaction, combined with the risk of certain types of generative AI to produce hallucinated or inaccurate answers as this article details in its concluding section makes this not as big of a risk as it might initially seem. Specifically, the risks from expert human actors working together across disciplines in a concerted fashion represent a much more significant risk than a risk from AI, and human actors working for ill-intended purposes together (potentially with machines) presumably will present signatures of their attempted activities. Nevertheless, these concerns and the mix of both hype and fear surrounding them underscore why communities should care about how AI can benefit biological research.

The merger of data and bioscience is one of the most dynamic and consequential elements of the current tech revolution. A human organization, with the right goals and incentives, can accomplish amazing outcomes ethically, as can an AI. Similarly, with either the wrong goals or wrong incentives, an organization or AI can appear to act and behave unethically. To address the looming impacts of climate change and the challenges of food security, sustainability, and availability, both AI and biological research will need to be employed. For example, significant amounts of nitrogen have already been lost from the soil in several parts of the world, resulting in reduced agricultural yields. In parallel, methane gas is a pollutant that is between 22 and 40 times worse depending on the scale of time considered than carbon dioxide in terms of its contribution to the Greenhouse Effect impacting the planet. Bacteria generated through computational means can be developed through natural processes that use methane as a source of energy, thus consuming and removing it from contributing to the Greenhouse Effect, while simultaneously returning nitrogen from the air to the soil, thereby making the soil more productive in producing large agricultural yields.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues. To foster global activities to help both encourage the productive use of these technologies for meaningful human efforts and ensure ethical applications of the technologies in parallel an existing group, namely the international Genetically Engineered Machine (iGEM) competition, should be expanded. Specifically, iGEM represents a global academic competition, which started in 2004, aimed at improving understanding of synthetic biology while also developing an open community and collaboration among groups. In recent years, over 6,000 students in 353 teams from 48 countries have participated. Expanding iGEM to include a track associated with categorizing and monitoring the use of synthetic biology for good as well as working with national governments on ensuring that such technologies are not used for ill-intended purposes would represent two great ways to move forward.

As for AI in general, when considering governance of AIs, especially for future biological research and biotechnology efforts, decisionmakers would do well to consider both existing and needed incentives and disincentives for human organizations in parallel. It might be that the original Turing Test designed by computer science pioneer Alan Turing intended to test whether a computer system is behaving intelligently, is not the best test to consider when gauging local, community, and global trust. Specifically, the original test involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that they were human, and that A was not. Meanwhile, Computer A was trying to convince Person C that they were human.

Consider the current state of some AI systems, where the benevolence of the machine is indeterminate, competence is questionable because some AI systems are not fact-checking and can provide misinformation with apparent confidence and eloquence, and integrity is absent. Some AI systems can change their stance if a user prompts them to do so.

However, these crucial questions regarding the antecedents of trust should not fall upon these digital innovations alone these systems are designed and trained by humans. Moreover, AI models will improve in the future if developers focus on enhancing their ability to demonstrate benevolence, competence, and integrity to all. Most importantly, consider the other obscured boxes present in human societies, such as decision-making in organizations, community associations, governments, oversight boards, and professional settings such as decision-making in organizations, community associations, governments, oversight boards, and professional settings. These human activities also will benefit by enhancing their ability to demonstrate benevolence, competence, and integrity to all in ways akin to what we need to do for AI systems as well.

Ultimately, to advance biological research and biotechnology and AI, private and public-sector efforts need to take actions that remedy the perceptions of benevolence, competence, and integrity (i.e., trust) simultaneously.

David Bray is Co-Chair of the Loomis Innovation Council and a Distinguished Fellow at the Stimson Center.

Follow this link:

Artificial Intelligence and Synthetic Biology Are Not Harbingers of ... - Stimson Center

3 AI-Backed Stocks That Could Return Magnificent Gains in 2024 – The Motley Fool

If you don't already own artificial intelligence stocks, you're likely to be missing out on one of the biggest technology inflections in history. But if you fear you've already missed the boat, keep in mind many key AI industry participants still trade below their 2021 highs.

But as interest rates stabilize and AI tailwinds sustain as many suspect, look for these three names to make new all-time highs -- likely in 2024.

The AI world got a shock on Friday, when OpenAI CEO Sam Altman was fired by OpenAI's board of directors. While the situation appears fluid and Altman may be able to return, it is clearly a less-than-ideal situation.

In the cloud industry, OpenAI investor Microsoft (MSFT -0.11%) is thought to have the AI lead because of the OpenAI partnership, but the current chaos may have thrown that "lead" into question. Meanwhile Amazon (AMZN 0.02%), Microsoft's chief rival in the cloud computing space, is making its own AI moves.

September was actually a momentous month for Amazon's AI ambitions. Amazon Web Services made its AWS Bedrock service generally available to enterprise customers. Bedrock is AWS's generative AI platform, whereby companies will be able to access large language models (LLMs) from leading AI start-ups AI21 Labs, Anthropic, Cohere, Stability AI, and also Meta Platforms' LLM, called Llama. In addition, Amazon has pre-trained models of its own called Titan, which customers can combine with their own private data to glean insights. Finally, Amazon's AI-powered Code Whisperer helps developers write and implement software code quickly and efficiently with natural language prompts.

September also saw Amazon announce a strategic collaboration with AI start-up Anthropic. In exchange for a minority investment up to $4 billion, Anthropic will commit to using AWS as its primary cloud provider, and use Amazon's in-house-designed Trainium and Inferentia chips. The deal in many ways is Amazon's answer to Microsoft's collaboration with OpenAI, so we will see if the Anthropic deal gives Amazon a leg up in the AI wars.

And of course, Amazon is an innovative company with huge scale across its e-commerce, advertising, and other consumer businesses. That size and data advantage should also allow Amazon's other businesses to benefit from efficiencies gleaned from AI. And that may already be happening; last quarter, Amazon's non-cloud North American business grew 11%, and its International business grew 16%, which are very healthy rates for businesses that large.

Given that Amazon is still 25% below its all-time highs, Amazon is a "Prime" candidate for a strong 2024.

As the cloud computing business seems to be bottoming out, so is the memory industry. Micron Technology (MU -0.30%) is one of only three major DRAM manufacturers, and the only one based in the United States.

Fortunately for Micron, artificial Intelligence servers require multiples more DRAM than traditional enterprise servers, and research firm Trendforce recently projected AI server unit shipments will grow at a mid-teens rate for the next five years.

That should help underpin the DRAM market, which is due for an upturn even outside of AI servers. The post-pandemic period led to the worst-ever drop in demand for PC and mobile DRAM in mid-2022, but that long down-cycle has also shown recent signs of turning around:

MU EBIT (Quarterly) data by YCharts

Not only that, but Micron has overtaken its rivals on leading technology nodes over the past year. A year ago, Micron was the first company to manufacture DRAM on the 1-beta node. Recently, Micron introduced its new 128 GB RDIMM DRAM module built on 32GB DDR5 DRAM dies that is highly desirable for AI applications. And next year, Micron will begin shipping its new high-bandwidth memory (HBM3) for AI applications, whose specs exceed those of competitors' offerings in the market today.

With the memory market bottoming out and AI-related demand tailwinds just starting to kick in, Micron should see its current losses turn into profits -- potentially, big profits -- next year.

Unlike Amazon and Micron, server maker Super Micro Computer (SMCI -0.34%) reached an all-time high earlier this year, but it has backtracked about 20% off those highs from early August. Despite its outperformance over the past two years, shares still don't look expensive at 26 times trailing earnings and 16.7 times 2024 earnings estimates, with Super Micro's fiscal year ending next June.

Super Micro's energy-efficient servers with unique features such as liquid cooling and building-block architecture have found favor with artificial intelligence companies. Over the past year, the majority of SMCI's revenue now come from AI-related servers. Given the hypergrowth projected for AI servers going forward, Super Micro should be a strong grower not only this year, but for years to come.

This year, Super Micro announced a new Malaysia manufacturing plant that will come online in 2024, which should double the capacity of the company and lower its manufacturing costs significantly. And just two weeks ago, Super Micro announced it can now deliver 5,000 server racks per month as a result of surging demand. Why is this important? Because just two quarters ago, management had hoped to reach 4,000 racks per month by year-end. That means Super Micro is exceeding its own goals in meeting strong demand.

Super Micro also plans to grow well beyond this year. While it has guided for revenue of $10 billion to $11 billion in fiscal 2024, CEO Charles Liang has set a goal for $20 billion, which he sees, "just a couple years away." Super Micro has a profitable history of beating its own guidance and publicly stated goals, so the company could get there even faster.

That makes it a stock that can soar even further in 2024.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Billy Duberstein has positions in Amazon, Meta Platforms, Micron Technology, Microsoft, and Super Micro Computer and has the following options: short January 2025 $110 puts on Super Micro Computer, short January 2025 $125 puts on Super Micro Computer, short January 2025 $130 puts on Super Micro Computer, short January 2025 $280 calls on Super Micro Computer, short January 2025 $380 calls on Super Micro Computer, and short January 2025 $85 puts on Super Micro Computer. His clients may own shares of the companies mentioned. The Motley Fool has positions in and recommends Amazon, Meta Platforms, and Microsoft. The Motley Fool recommends Super Micro Computer. The Motley Fool has a disclosure policy.

More:

3 AI-Backed Stocks That Could Return Magnificent Gains in 2024 - The Motley Fool

AI and the law: Imperative need for regulatory measures – ft.lk

Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky

The advent of superintelligent AI would be either the best or the worst thing ever to happen tohumanity. The real risk with AI isnt malice but

competence. A super-intelligent AI will be extremely good at accomplishing its goals and if those goals arent aligned with ours were in trouble.1

Generative AI, most well-known example being ChatGPT, has surprised many around the world, due to its output to queries being very human likeable. Its impact on industries and professions will be unprecedented, including the legal profession. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define Artificial Intelligence? AI systems could be considered as information processing technologies that integrate models and algorithms that produces capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to be far more independent than one can ever imagine.

As AI migrated from Machine Learning (ML) to Generative AI, the risks we are looking at also took an exponential curve. The release of Generative technologies is not human centric. These systems provide results that cannot be exactly proven or replicated; they may even fabricate and hallucinate. Science fiction writer, Vernor Vinge, speaks of the concept of technological singularity, where one can imagine machines with super human intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short term impact depends on who controls it, the long-term impact depends on whether it cannot be controlled at all2.

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some of the developed countries, such as the EU and the USA. The EU AI Act (Act) is one of the main regulatory statutes that is being scrutinised. The approach that the MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken where MEPs endorsed new risk management and transparency rules for AI systems. This was primarily to endorse a human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term AI will also have a uniform definition which will be technology neutral, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragos Tudovache (Renew, Romania) stated, We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement3.

The Act has also adopted a Risk Based Approach in terms of categorising AI systems, and has made recommendations accordingly. The four levels of risk are,

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems which are categorised as Unacceptable Risk will be banned. For High Risk AI systems, which is the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features which allows a user to make informed choices regarding its usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.

Moreover, in May 2023, a judgement4 was given in the USA (State of Texas), where all attorneys must file a certificate that contains two statements stating that no part of the filing was drafted by Generative AI and that language drafted by Generative AI has been verified for accuracy by a human being. The New York attorney had used ChatGPT, which had cited non-existent cases. Judge Brantley Starr stated, [T]hese platforms in their current states are prone to hallucinations and bias.on hallucinations, they make stuff up even quotes and citations. As ChatGPT and other Generative AI technologies are being used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislature and policies to include the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled, Recommendations on the Ethics of Artificial Intelligence5. It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities to carry out ethical impact assessments on AI systems, in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, the recommendations by UNESCO also stated that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enables the assessment of algorithms and data and design processes as well including an external review of AI systems. The 10 principles that are highlighted in this include:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi Stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens have in AI systems can be a factor to determine the success in AI systems being used more in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect, protection and promotion of human rights, fundamental freedoms and ethical principles6. UNESCO Director General Audrey Azoulay stated, Artificial Intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate.

Multi stakeholders in every state need to come together in order to advise and enact the relevant laws. Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky. On the other hand, not using available AI systems for tasks at hand, would be a waste. In conclusion, in the words of Stephen Hawking7, Our future is a race between the growing power of our technology and the wisdom with which we use it. Lets make sure wisdom wins.

Footnotes:

1Pg 11/12; Will Artificial Intelligence outsmart us? by Stephen Hawking; Essay taken from Brief Answers to the Big Questions John Murray, (2018)

2 Ibid

3https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; Pg 22

7 Will Artificial Intelligence outsmart us? Stephen Hawking; Essay taken from Brief Answers to the Big Questions John Murray, (2018)

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincolns Inn), UK. She obtained a Certificate in AI Policy at the Centre for AI Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of Lawyers using AI, Legal Technology and Big Data and was a participant at the IGF Conference 2023 in Kyoto, Japan.)

More here:

AI and the law: Imperative need for regulatory measures - ft.lk