We still can't make a car that drives itself; it's unlikely our artificial intelligence creations will take over the world – Toronto Star

What is a mind?

I must admit, I never expected there would be any reason to ask that question outside of a philosophy class.

But the hype and fear around artificial intelligence has grown to a fever pitch, making that question and others like it suddenly worth pondering.

What is a superintelligence? What is an identity? Do computers have egos?

This sudden contemplative turn is a result of a drumbeat of doom and hype about AI. It's going to change everything, we are told, up to and including, apparently, ending all life on Earth.

Consider: Half of AI researchers surveyed last summer believe there is a 10 per cent chance AI will lead to human extinction. OpenAI's Sam Altman, one of AI's most prominent proponents, says he worries it might end the world.

And as if that weren't enough, last week an open letter signed by a long list of people that includes tech leaders Elon Musk, Andrew Yang and Steve Wozniak stated that it's time to take a six-month pause on AI research to consider the risks.

Suddenly, we've gone from artificial intelligence being a sci-fi trope to being the source of some very public and very extreme fear.


As with any claim made by the capitalist class, some skepticism is warranted. How better to hype your new product than claim it is all-powerful?

But if a rare call by technologists to actually think about consequences is at least a little refreshing, the claims of both AI's doomsayers and its proponents border on the absurd.

The fear expressed in the letter reflects a broader trend in which ideas about the threat posed by artificial intelligence are optimistic at best and spurious at worst. AI is not about to lead to human extinction. And to understand why, one needs to answer some of those strangely abstract questions about minds, intelligence and identity that are nonetheless vital.

According to signatories of this letter, AI systems with human-competitive intelligence "can pose profound risks to society and humanity." Those risks range from things you'd expect, like AI replacing jobs, to the more grandiose: non-human minds that "might eventually outnumber, outsmart, obsolete and replace us," which would represent a profound change in the history of life on Earth.

Here is the idea behind it: artificial intelligence very rapidly evolves to become sentient and is able to make decisions according to its own wishes. As it exponentially scales up in capability, it becomes an impossibly evolved mind, and in its superintelligent wisdom could, on a whim, simply wipe us all out.

This is wildly far-fetched. A mind is the product of will, of ego. When we act, we do so not simply out of the programming of our ideology or our values, but also out of desire. While intelligent software could act independently, it will never act intentionally, because it has no identity from which to act.

There is also the more vexing question of what superintelligence actually might be. It bears asking what some combination of math and logic and synthesis might specifically produce that is so beyond the realm of imagining that it will revolutionize the world.

The assumption of a radical superintelligence misunderstands both what intelligence is and what causes problems in the world. It isn't a lack of intelligence that has children starving, a housing crisis in countless cities, or climate change. It is, rather, politics: how, when and where people and technology are deployed to address issues.

It betrays a blinkered view of life in which we simply aren't smart enough to fix our problems. What is in fact true is that we are stuck in the issues of real life lived by real people, and as a result are mired in politics, history, culture.

It's the same myopic mentality from which some make claims about a coming human extinction. To jump to the extreme example: a superintelligent AI might only launch the nukes if it has in fact been structured and allowed to do that. That is, whatever artificial intelligence becomes, it is up to humans to decide when and where it is used and to what ends it is put.

But then, even all this discussion is itself premature. Take another hugely complex software problem, the self-driving car. For years we were told they were just around the corner; that is, until the people involved realized just how complicated the problem is and started saying that we are perhaps decades away from a fully autonomous car.

Are we to believe, then, that we cannot make a car that drives itself, but we can become gods and create a non-human mind?

It is not that artificial intelligence will not be transformative. The capacity to outsource the analysis and synthesis of data to technology will have both deep and broad effects.

But the doomsaying about AI is as much marketing as anything else: just a lot of chatter about intelligence and minds from some very clever people who appear to have spent too little time thinking about what those things actually are.
