AI’s Illusion of Rapid Progress

The media loves to report on everything Elon Musk says, particularly when it is one of his very optimistic forecasts. Two weeks ago he said: “If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years.”

In 2019, he predicted there would be a million robo-taxis by 2020, and in 2016 he said about Mars, “If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025.”

On the other hand, the media places less emphasis on negative news, such as announcements that Amazon would abandon its cashier-less technology, called “Just Walk Out,” because it wasn’t working properly. Introduced three years ago, the tech purportedly enabled shoppers to pick up meat, dairy, fruit and vegetables and walk straight out without queueing, as if by magic. That magic, which Amazon dubbed ‘Just Walk Out’ technology, was said to be autonomously powered by AI.

Unfortunately, it wasn’t. Instead, the checkout-free magic was happening in part due to a network of cameras overseen by over 1,000 people in India who would verify what people took off the shelves. Their tasks included “manually reviewing transactions and labeling images from videos.”

Why is this announcement more important than Musk’s prediction? Because so many of the predictions by tech bros such as Elon Musk are based on the illusion that many AI systems are working properly, when they are still only 95% there, with the remaining 5% dependent on workers in the background. The obvious example is self-driving vehicles, which are always a few years away, even as many of those vehicles are controlled by remote workers.

But self-driving vehicles and cashier-less technology are just the tip of the iceberg. A Gizmodo article listed about 10 examples of AI technology that seemed to be working but just weren’t.

A company named Presto Voice sold its drive-thru automation services, purportedly powered by AI, to Carl’s Jr., Chili’s, and Del Taco, but in reality, offsite workers in the Philippines are needed to help with over 70% of Presto’s orders.

Facebook released a virtual assistant named M in 2015 that purportedly used AI to book your movie tickets, tell you the weather, or even order food from a local restaurant. But it was mostly human operators who were doing the work.

There was an impressive Gemini demo in December 2023 that showed how Gemini’s AI could allegedly interpret video, image, and audio inputs in real time. That video turned out to be sped up and edited: humans had fed Gemini long text and image prompts to produce its answers. Today’s Gemini can barely respond to controversial questions, let alone do the backflips it performed in that demo.

Amazon has for years offered a service called Mechanical Turk, which farms out small tasks to human workers. One example involved Expensify in 2017: you could take a picture of a receipt and the app would supposedly verify automatically that it was an expense compliant with your employer’s rules and file it in the appropriate location. In reality, a team of “secure technicians,” who were often Amazon Mechanical Turk workers, filed the expense on your behalf.

The startup x.ai offered a virtual assistant in 2016 that had access to your calendar and could correspond with you over email. In reality, humans posing as AI responded to the emails, scheduled meetings on calendars, and even ordered food for people.

Google claims that AI is scanning your Gmail inbox for information to personalize ads, but in reality humans are doing the work and seeing your private information.

In the last three cases, real humans were viewing private information such as credit card numbers, full names, addresses, food orders, and more.

Then there are the hallucinations that keep cropping up in the output of large language models. Many experts claim that the lowest hallucination rates among tracked AI models are around 3 to 5%, and that they aren’t fixable because they stem from the LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts.

Every time you hear one of the tech bros talking about the future, keep in mind that they think large language models and self-driving vehicles already work almost perfectly. They have already filed away those cases as successfully done, and they are thinking about what’s next.

For instance, Garry Tan, the president and CEO of startup accelerator Y Combinator, claimed that Amazon’s cashier-less technology was “ruined by a professional managerial class that decided to use fake AI”:

“Honestly it makes me sad to see a Big Tech firm ruined by a professional managerial class that decided to use fake AI, deliver a terrible product, and poison an entire market (autonomous checkout) when an earnest Computer Vision-driven approach could have reached profitability.”

The president of Y Combinator should have known that humans were needed to make Amazon’s technology work, as well as many other AI systems. Y Combinator is one of America’s most respected venture capital firms. It has funded around 4,000 startups, and Sam Altman, currently CEO of OpenAI, was its president between 2014 and 2019. For Garry Tan to claim that Amazon could have succeeded if it had used real tech, after many other companies have failed doing the same thing, suggests he is either misinformed or lying.

So the next time you hear that AGI is imminent or that jobs will soon be gone, remember that most of these optimistic predictions assume that Amazon’s cashier-less technology, self-driving vehicles, and many other systems already work, when they are only 95 percent there, and the last five percent is the hardest.

In reality, those systems won’t be done for years because the last few percentage points of work usually take as long as the first 95%. So what the media should be asking the tech bros is how long it will take before those systems go from 95% successfully done autonomously to 99.99% or higher. Similarly, what companies should be asking the consultants is when the 95% will become 99.99%, because the rapid progress is an illusion.
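To get a feel for the size of that remaining gap, here is a rough, illustrative calculation that uses only the 95% and 99.99% figures above (the arithmetic is ours, not from any of the companies mentioned): moving from 95% to 99.99% autonomous handling means cutting the share of cases that still need a human from about 1 in 20 to about 1 in 10,000, a roughly 500-fold reduction.

```python
# Illustrative arithmetic only, based on the 95% and 99.99% figures in the text:
# how much the human-fallback rate must shrink before a system is truly "done."

def human_fallback_rate(autonomy_pct: float) -> float:
    """Fraction of cases that still need a human at a given autonomy percentage."""
    return 1.0 - autonomy_pct / 100.0

current = human_fallback_rate(95.0)    # 0.05   -> roughly 1 case in 20
target = human_fallback_rate(99.99)    # 0.0001 -> roughly 1 case in 10,000

print(f"Current fallback rate: {current:.4f} (about 1 in {round(1 / current)})")
print(f"Target fallback rate:  {target:.4f} (about 1 in {round(1 / target)})")
print(f"Required reduction:    {current / target:.0f}x")
```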

Too many people are extrapolating from systems that are purportedly automated, even though they aren’t yet working properly. Any extrapolation should therefore try to understand when those systems will become fully automated, not just when these new forms of automation will begin to be used. Understanding what’s going on in the background is important for understanding what the future will look like in the foreground.
