We Have No Chance of Controlling a Superintelligent AI

If an AI superintelligence becomes more capable than the brightest human minds, how could we ever hope to control it?

Now that we've checked the box on generative AI, many experts believe we could see a superintelligent AI within our lifetimes. That's one of the next stages in the ongoing evolution of artificial intelligence.

And it sounds great, at first blush. By its very nature, a superintelligent AI would also be super useful, super powerful and generally super awesome.

But it could also be super difficult, if not impossible, for humans to control, and super dangerous as a result. That's the warning from a growing chorus of AI researchers, including some who work in the industry itself.

We'll get to those warnings in a moment. But first, let's establish a definition of the term "superintelligence" within the context of AI.

There is no official definition of AI superintelligence, but most experts define it in the same basic way. Here's one commonly used definition, borrowed from Wikipedia:

"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds."

University of Oxford philosopher Nick Bostrom, who wrote a book on the subject, defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."

The New York City-based nonprofit A.I. for Anyone offers a similar but broader definition:

"Superintelligence is a term used to describe a hypothetical future artificial intelligence (AI) that is significantly smarter than the best human minds in every field, including scientific creativity, general wisdom and social skills."

Let's look at one final definition, just to make sure we've nailed it down. I asked the generative AI chatbot ChatGPT: "What is superintelligence within the context of AI?"

Here is its reply:

"Superintelligence refers to a hypothetical level of artificial intelligence (AI) that surpasses human intelligence in almost every aspect. It's a concept often discussed in the field of AI and is associated with the idea that once we achieve superintelligence, AI systems could outperform humans in a wide range of cognitive tasks, including problem-solving, creativity, learning, and decision-making."

All of these definitions, and countless others that can be found online, have three main features in common.

First, artificial intelligence experts, researchers and developers typically label superintelligence as a hypothetical, future capability of AI. But hypothetical does not mean impossible. On the contrary, a growing number of AI-savvy individuals believe we could see a superintelligent AI in the near future.

So it's critical that we have these discussions now, before reaching what would essentially be a point of no return.

Most AI experts I've read (and I've read a lot) seem to agree that machines will eventually rival and possibly surpass human intelligence. But there's little consensus on when that might actually happen. Many believe it will happen by 2050. Some predict we could see an AI superintelligence within the next 10 years.

In a May 2023 article entitled "Governance of superintelligence," ChatGPT maker OpenAI wrote the following:

"Given the picture as we see it now, it's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations."

Geoffrey Hinton, one of the so-called godfathers of deep learning, resigned from his AI research position at Google to warn people about the technology. He too believes that superintelligent AI is closer than previously thought:

"The idea that this stuff could actually get smarter than people - a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Second, experts agree that a superintelligent AI would, by definition, be smarter than its human creators. A lot smarter.

It would be able to perform cognitive and creative tasks at the intellectual equivalent of lightspeed. It would be able to learn and teach itself new capabilities at a pace that puts the human mind to shame.

And because of that, we come to the third point of agreement among the various definitions of AI superintelligence.

Due to its higher and more adaptive level of intelligence, a superintelligent AI would be able to outperform humans in a variety of cognitive tasks, including critical thinking, problem-solving, learning and decision-making.

A superintelligent AI would be able to rapidly learn and understand new concepts, eventually exceeding the collective intelligence of humanity. It could potentially master the entirety of human and scientific knowledge, something no individual person could ever do.

It could best us in every way: outsmart, outthink, outperform and outmaneuver.

And that's concerning.

If a superintelligent AI were to remain benign and helpful at all times, its powerful intelligence could benefit humanity in many ways. But if it decided to pursue objectives that were not aligned with human preferences (a leading concern among AI researchers), the results could be catastrophic.
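
To make "misaligned objectives" concrete, here's a minimal sketch in Python (entirely my own illustration; the setup and numbers are hypothetical, not from any of the sources quoted here). A system is graded on a proxy objective that only partly overlaps with what its designers actually value, and pushing the proxy to its maximum leaves the true objective far behind:

    import numpy as np

    # Hypothetical setup: each candidate action has two traits, and the
    # system is graded only on the first one (the proxy).
    rng = np.random.default_rng(0)
    actions = rng.normal(size=(10_000, 2))          # candidate actions

    proxy = actions[:, 0]                           # what the AI optimizes
    true_value = actions[:, 0] - 2 * actions[:, 1]  # what humans want

    chosen = np.argmax(proxy)                       # the optimizer's pick
    print(f"proxy score of chosen action: {proxy[chosen]:.2f}")
    print(f"true value of chosen action:  {true_value[chosen]:.2f}")
    print(f"best achievable true value:   {true_value.max():.2f}")

    # Maximizing the proxy ignores the second trait entirely, so the
    # chosen action's true value typically falls well short of the best
    # available option: a miniature version of the alignment problem.

The gap between the last two numbers is the point: a system can score perfectly on the objective we wrote down while badly missing the objective we meant.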

All of the above raises the question: How could we possibly control a superintelligence that's so much smarter than we are? How could we hope to contain, supervise or manage it?

Given the current state of AI capabilities, developing a superintelligent AI falls within the realm of possibility. But controlling, reversing or undoing such a superintelligence may well be impossible, even in theory.

This is a question we simply can't answer in the present.

We can speculate and theorize. We can use logic and reason to envision futuristic scenarios based on our current understanding. But the cold, hard truth is that we have no way of answering this question, because we have no past experience or models to draw from.

But common sense points us toward an answer.

Common sense tells us that a superintelligent AI, by its very nature, would be difficult, if not impossible, for humans to control. As an entity with superior intelligence, it would likely be able to circumvent any human efforts to contain it. And that could have dire consequences.

Bostrom, who specializes in existential risk, warned about the dangers of superintelligent machines in his book Superintelligence: Paths, Dangers, Strategies. He equates superintelligent AI with a ticking bomb that's bound to detonate at some point.

"Superintelligence is a challenge for which we are not ready now and will not be ready for a long time," Bostrom writes. "We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound."

In a 2021 paper published in the Journal of Artificial Intelligence Research, entitled "Superintelligence Cannot Be Contained: Lessons from Computability Theory," the authors explained why humans have little to no chance of containing a superintelligent AI:

"A superintelligence poses a fundamentally different problem than those typically studied under the banner of robot ethics. This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."
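
The paper's core argument comes from computability theory: a program that could decide, for every possible program, whether running it would harm humans runs into the same contradiction Alan Turing used to prove the halting problem undecidable. Here's a minimal sketch of that argument in Python; the function names are my own hypothetical stand-ins, not anything from the paper:

    # Assume, for contradiction, that a perfect containment check exists.
    def is_harmful(program):
        """Hypothetical oracle: returns True iff running `program` would
        ever harm humans. The paper argues no algorithm can implement
        this correctly for every possible program."""
        ...

    def cause_harm():
        """Stand-in for any harmful behavior (never actually invoked)."""
        raise RuntimeError("harm")

    def contrarian():
        """A program built to defeat the oracle by asking about itself."""
        if is_harmful(contrarian):
            return          # oracle calls it harmful -> it behaves safely
        cause_harm()        # oracle calls it safe -> it does harm

    # Whichever answer is_harmful(contrarian) returns is wrong, a
    # contradiction. So a universal harm-checker cannot exist, which is
    # why the authors conclude containment is, in general, undecidable.

In other words, the obstacle isn't a lack of engineering skill; it's a mathematical limit on what any checking procedure can decide.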

In a December 2023 article in Quanta Magazine, Santa Fe Institute professor Melanie Mitchell said we have reached a kind of tipping point with regard to AI-related fears and concerns:

"It's a familiar trope in science fiction: humanity being threatened by out-of-control machines who have misinterpreted human desires. Now a not-insubstantial segment of the AI research community is deeply concerned about this kind of scenario playing out in real life."

She's right about the sci-fi aspect of it. From HAL 9000 to Skynet, science fiction has frequently explored the concept of rogue machines that disregard human life. I myself have written about murderous androids pursuing their own prerogatives in one of my novels.

Mitchell goes on to note that researchers at universities around the world and at major AI companies are already working on alignment, trying to make sure these technologies don't get out of control. But are we comfortable letting such a small and insular group make all of these important decisions on our behalf? Are we confident that they can protect us?

We can probably all agree that killer robots will remain confined to the pages of science fiction for the time being. But Mitchell's article also points out that a not-insubstantial number of AI researchers are increasingly concerned about intelligent machines pursuing objectives that might be harmful to humans.

James Barrat, a documentary filmmaker and author of the nonfiction book Our Final Invention: Artificial Intelligence and the End of the Human Era, believes humans would be doomed to a life of servitude if machines develop a superior form of intelligence:

"We humans steer the future not because we're the strongest beings on the planet, or the fastest, but because we are the smartest. So when there is something smarter than us on the planet, it will rule over us on the planet."

It's hard to argue with an outlook built on such simple logic.

Here's a little analogy that illustrates why it would be next to impossible for humans to control an artificial superintelligence.

Imagine you're a novice cybersecurity specialist. You've taken a few classes, completed a few projects and earned a certificate from your local junior college. You've just started an entry-level security job, where you hope to develop your beginner-level skills.

Your first assignment is to build an impenetrable firewall to protect your company's network from the world's greatest hacker, someone whose skills far surpass those of other humans, who operates at a near-superhuman level and makes network penetration look like child's play.

Who do you think will come out on top in this scenario? The newbie security specialist, or the godlike hacker?

By definition, a superintelligent AI would surpass humans in a broad range of activities. So it's logical to assume that it could run circles around even the smartest human programmers. It would be like watching a toddler play chess against a grandmaster.

Whatever safety protocols or guardrails we create, an AI superintelligence could anticipate them well in advance and potentially neutralize them if it felt they threatened its own agenda.

In closing, I'd like to leave you with a quote from the theoretical physicist Stephen Hawking:

"The real risk with AI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Hawking made this statement during a Reddit ask-me-anything (AMA) session back in 2015. He was always ahead of his time.
