The new technocracy: who's who in the chatbot revolution?

Decades are happening in weeks in the world of artificial intelligence. A fortnight ago, OpenAI released GPT-4, the latest model of its chatbot. It passed the bar exam in the 90th percentile, whereas the previous model only managed the tenth. Last week, Google introduced its own chatbot, Bard. Now, the British government is announcing plans to regulate AI for the first time, as well as to introduce it into hospitals and schools. Even some of the biggest technophobes are having to grasp this brave new world.

We're familiar with some of the technology by now, but we know little about the humans in the world of AI. From Steve Jobs to Bill Gates and Mark Zuckerberg, the previous generation of techies were household names, so well known that they were depicted by everyone from Harry Enfield to Jesse Eisenberg. But the chatbots seem to have come from the ether. Who makes them? Who willed them into existence? And who wants to stop them?

Sam Altman is the king in this world. He's the CEO of OpenAI, the San Francisco lab behind ChatGPT and the image generator DALL-E. He's 37 and looks like an AI-generated tech bro: hoodie, messy black hair, nervous eyes. He dropped out of a computer science course at Stanford and became the CEO of a firm called Loopt at the age of 19 (raising $30 million, which is apparently nothing in the venture-capital world). After that, he became a partner and eventually president of Y Combinator, a startup accelerator. It helped launch Reddit (of which Altman was CEO for eight days), Stripe, Airbnb, Dropbox and Coinbase.

Altman did all of this by 30. Then he started getting scared. He seemed to become convinced that a superintelligent AI could feasibly wipe out everything we hold dear. Altman wasn't convinced that Google, leading the field at the time in its partnership with DeepMind, could be trusted. He thought it was too concerned about revenue and wasn't doing enough to make sure its AI remained friendly. So he started his own AI company in December 2015, and seven years later we had ChatGPT.

But it was less than encouraging last month when its AI said it wanted to hack the nuclear codes. Researchers have said this was a hallucination and an accident (and in truth, it was coaxed into saying that by a New York Times journalist). But Altman has already said he's ready to slow things down if people get nervous.

The question many are asking of Altman is: if you are so scared about AI wiping out humanity, why on earth are you building it? Altman has said OpenAI is operating 'as if the risks are existential'. But there's a boyish sense of wonder to his mission: he wrote last month that he wants AI to empower humanity to 'maximally flourish in the universe'. The upsides could be incredible: if AI helps humanity become an interstellar species, some have calculated that it could bring the number of humans to exist in the future to 100,000,000,000,000,000,000,000,000,000,000,000 (yes, that's 35 zeroes). No disease, no illness, no sadness: the AI of the future would know how to cure all that. Altman basically thinks it's worth the risk.

Altman wears some of his anxiety visibly. His voice trembles and his eyebrows do things. He once said that AI will 'probably most likely lead to the end of the world, but in the meantime, there'll be great companies'. The New Yorker asked him if he was scared about AI turning on us. 'I try not to think about it too much,' he said. 'But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur I can fly to.' In Siliconese, I believe that's called hedging.

One person Altman has upset is Elon Musk. The world's richest man was all aboard the OpenAI train when it was founded in 2015. But he left the board three years later. There's some debate about why: the public explanation is that he was worried about conflicts of interest with his Tesla work. But there are reports that he didn't like the way Altman was running things. Semafor claimed this weekend that Musk thought OpenAI was being outpaced by Google. So in 2018, he suggested that he take over OpenAI and run it himself. That idea was reportedly rejected by everyone else, including Altman, and Musk lashed out: he quit the company, and last year barred it from having access to Twitter's data. Originally founded as a non-profit, OpenAI had to change tack after Musk's billions dried up. Musk now complains about OpenAI on Twitter (which Altman has called him 'a jerk' for doing), and on Friday he said that 'the most powerful tool that mankind has ever created' is 'in the hands of a ruthless corporate monopoly'.

So Musk, whether out of concern or revenge, has started siding with the worriers. The most influential of these is Nick Bostrom, who is based at Oxford University's Future of Humanity Institute. He made his name in the early 2000s when he suggested that we probably live in a computer simulation. After nestling that unsettling thought in our heads, he went on to work on cuddly topics like global catastrophic risks, and in 2014 published Superintelligence.

The book makes the most accessible argument yet for why people should be worried about AI. The problem, Bostrom says, is that its intelligence is nothing like ours. If you don't tell it exactly what you want it to do, bad stuff happens. If I asked you to get me a coffee, you would walk to the shop and mind pedestrians as you did so. If I asked a badly aligned AI, it might blitz everything in its path and bring back several metric tonnes of beans from Kenya. Bostrom's example is called the paperclip maximiser. Imagine you've told a machine to make paperclips without specifying that you'd quite like us all to stay alive, thanks. After making paperclips for a while, the machine looks at you and goes: you're made of things I could make a paperclip out of. It would soon start killing humans in its eternal quest to make as many clips as possible. We all die. It's intelligence without common sense.
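For the technically minded, the failure mode is easy to caricature in a few lines of code. The sketch below is entirely invented for illustration (the world state, the yield numbers and the function names are all made up; nothing like this appears in Bostrom's book): an optimiser is given a single objective and no side constraints, so it has no way to tell ore from people.

```python
# A toy caricature of Bostrom's paperclip maximiser: the objective says
# "maximise paperclips" and nothing else, so nothing protects the humans.

# Hypothetical world state, in made-up units of convertible matter.
world = {"iron_ore": 1_000, "factories": 5, "humans": 100}

def paperclips_from(resource: str, amount: float) -> float:
    """Paperclips obtainable per unit of each resource (invented numbers)."""
    yield_per_unit = {"iron_ore": 10.0, "factories": 2.0, "humans": 1.0}
    return yield_per_unit[resource] * amount

def maximise_paperclips(world: dict) -> float:
    """Convert everything convertible into paperclips.

    The objective never distinguishes ore from people, so the optimiser
    happily consumes both. That indifference is the misalignment.
    """
    total = 0.0
    for resource in list(world):
        total += paperclips_from(resource, world[resource])
        world[resource] = 0  # resource consumed, humans included
    return total

print(maximise_paperclips(world))  # a lot of paperclips, no humans left
```

The point is not the arithmetic but the objective: nowhere does the code say that humans matter, so the maximiser treats them as feedstock.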

Taking this argument further is Eliezer Yudkowsky. He was warning us about superintelligence while George W. Bush was president. He recently went on a podcast and said we're all going to die, and that we could do so within the next three years. When OpenAI was founded in 2015, he spent a night crying. When he was 20, he founded the Singularity Institute for Artificial Intelligence, intended to bring a utopia into being as quickly as possible. But over the 2000s his thinking darkened, and he eventually set up the Machine Intelligence Research Institute (MIRI), which aims to stop AI from killing us. Yudkowsky seems to be giving up, though. MIRI announced last year that it was conceding defeat on AI safety, after concluding that there was no surefire way to align it, and that it was switching to a 'Death with Dignity' strategy. The statement was released on April Fools' Day, but most detected some sincerity. MIRI said we should accept that it's lights-out, and try to have fun while we can.

Even though Yudkowsky is a celebrity in the AI world (he has been pictured with Sam Altman and Grimes), he's hardly a household name. Some have harshly said that 'AI safety has been neglected because of Eliezer refusing to get hotter'.

Yudkowsky communicates through the website LessWrong, where people really into existential risk, cognitive biases and rationality hang out. It's been described as a cult, which it may or may not be, but it's influential either way. At its best, it's a concentration of incredibly intelligent people trying to make the world better. At worst, it can become one of those over-philosophical places where people say it's technically rational to have sex with a dog. It was Yudkowsky and the economist Robin Hanson who made LessWrong famous, but Hanson disagrees with Yudkowsky. He thinks we've got much more time, and that we should start worrying in 150 years, perhaps. His argument is somewhat obscure, but part of it is that these things, throughout human history, take longer than you expect to develop.

Peter Thiel thinks all these people are pathetic. The PayPal co-founder told students at Oxford in January that those worried about existential risk are themselves going to kill people. They're slowing down technological progress, they're slowing down the curing of diseases: they're slowing down the world getting better. Thiel can't stand stagnation. He's like a 1950s kid whose dream didn't come true. As he once said: 'They promised us flying cars and all we got is 140 characters.' Thiel's philosophy is: go, go, go! Buckle up.

A final word on Google. Last week it released Bard, its own chatbot, and is now posing as the sensible one. The brains behind Bard came from DeepMind, founded by Demis Hassabis, Shane Legg and Mustafa Suleyman. They met at UCL in 2010, and within four years had sold their start-up to Google for $500 million. They then made AlphaGo, which in 2015 beat the European Go champion Fan Hui and got everyone very excited. Their biggest achievement came in 2020, when they largely solved the structure of proteins in the body, a 50-year-old problem. DeepMind is quietly changing the world from King's Cross. It won't come as a surprise to many Londoners that the apocalypse could start there.

Sam Altman is adamant that he has to race ahead to make the best AI before anyone else does so recklessly. He's got Google spooked: CEO Sundar Pichai declared a 'code red' when ChatGPT was released (in other words: hurry up!). The investment is pouring in, and slowing down doesn't look too profitable. If we're heading for a crash, we should probably check the drivers.
