Fantasy fears about AI are obscuring how we already abuse machine intelligence

Opinion

We blame technology for decisions really made by governments and corporations

Sun 11 Jun 2023 01.31 EDT

Last November, a young African American man, Randal Quran Reid, was pulled over by the state police in Georgia as he was driving into Atlanta. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. Reid had never been to Louisiana, let alone New Orleans. His protestations came to nothing, and he was in jail for six days as his family frantically spent thousands of dollars hiring lawyers in both Georgia and Louisiana to try to free him.

It emerged that the arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed "a credible source" had identified Reid as the culprit. The facial recognition match was incorrect, the case eventually fell apart and Reid was released.

He was lucky. He had the family and the resources to ferret out the truth. Millions of Americans would not have had such social and financial assets. Reid, though, is not the only victim of a false facial recognition match. The numbers are small, but so far all those arrested in the US after a false match have been black. Which is not surprising given that we know not only that the very design of facial recognition software makes it more difficult to correctly identify people of colour, but also that algorithms replicate the biases of the human world.

Reid's case, and those of others like him, should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped the discussion of AI has become, and how much it needs resetting. There has long been an undercurrent of fear about the kind of world AI might create. Recent developments have turbocharged that fear and inserted it into public discussion. The release last year of version 3.5 of ChatGPT, and of version 4 this March, created awe and panic: awe at the chatbot's facility in mimicking human language and panic over the possibilities for fakery, from student essays to news reports.

Then, two weeks ago, leading members of the tech community, including Sam Altman, the CEO of OpenAI, which makes ChatGPT, Demis Hassabis, CEO of Google DeepMind, and Geoffrey Hinton and Yoshua Bengio, often seen as the godfathers of modern AI, went further. They released a statement claiming that AI could herald the end of humanity. "Mitigating the risk of extinction from AI," they warned, "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

If so many Silicon Valley honchos truly believe they are creating products as dangerous as they claim, why, one might wonder, do they continue spending billions of dollars building, developing and refining those products? It's like a drug addict so dependent on his fix that he pleads for enforced rehab to wean him off the hard stuff. Parading their products as super-clever and super-powerful certainly helps massage the egos of tech entrepreneurs as well as boosting their bottom line. And yet AI is neither as clever nor as powerful as they would like us to believe. ChatGPT is supremely good at cutting and pasting text in a way that makes it seem almost human, but it has negligible understanding of the real world. It is, as one study put it, little more than a "stochastic parrot".

We remain a long way from the holy grail of artificial general intelligence: machines that possess "the ability to understand or learn any intellectual task a human being can", and so can display the same rough kind of intelligence that humans do, let alone a superior form of intelligence.

The obsession with fantasy fears helps hide the more mundane but also more significant problems with AI that should concern us; the kinds of problems that ensnared Reid and which could ensnare all of us. From surveillance to disinformation, we live in a world shaped by AI. "A defining feature of the new world of ambient surveillance," the tech entrepreneur Maciej Ceglowski observed at a US Senate committee hearing, "is that we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive." We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be.

The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with the decisions made by humans. The humans that created the software and trained it. The humans that deployed it. The humans that unquestioningly accepted the facial recognition match. The humans that obtained an arrest warrant by claiming Reid had been identified by "a credible source". The humans that refused to question the identification even after Reid's protestations. And so on.

Too often when we talk of the problem of AI, we remove the human from the picture. We practise a form of what the social scientist and tech developer Rumman Chowdhury calls "moral outsourcing": blaming machines for human decisions. We worry AI will eliminate jobs and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. Headlines warn of racist and sexist algorithms, yet the humans who created the algorithms and those who deploy them remain almost hidden.

We have come, in other words, to view the machine as the agent and humans as victims of machine agency. It is, ironically, our very fears of dystopia, not AI itself, that are helping create a world in which humans become more marginal and machines more central. Such fears also distort the possibilities of regulation. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI and to new technology, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our sense of fatalism and our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.

Kenan Malik is an Observer columnist

Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words to be considered for publication, email it to us at observer.letters@observer.co.uk
