This Researcher Says AI Is Neither Artificial nor Intelligent – WIRED

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.

KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.


AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in the way human intelligence is. It's not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we've made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.

You take on that myth by showing how AI is constructed. Like many industrial processes, it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems like face recognition services being more error-prone for minorities.

We need to look at the nose-to-tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just raw material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn't an inert substance; it always brings a context and a politics. Sentences from Reddit will be different from those in kids' books. Images from mugshot databases have different histories than those from the Oscars, but they are all used alike. This causes a host of problems downstream. In 2021, there's still no industry-wide standard to note what kinds of data are held in training sets, how they were acquired, or potential ethical issues.

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence that a person's emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that's so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating that to simple, predefined emotional states works with machine learning if you drop culture and context, and the fact that you might change the way you look and feel hundreds of times a day.

That also becomes a feedback loop: because we have emotion detection tools, people say we want to apply them in schools and courtrooms and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to use emotion recognition on kids in schools. This takes us back to the phrenological past, the belief that you can detect character and personality from the face and the shape of the skull.

You contributed to recent growth in research into how AI can have undesirable effects. But that field is entangled with people and funding from the tech industry, which seeks to profit from AI. Google recently forced out two respected researchers on AI ethics, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research questioning AI?
