Before the end of the world, can’t we just laugh at AI?

The satirical article speculated that although plenty of news attention was paid to the AI Safety Summit convened by UK Prime Minister Rishi Sunak at Bletchley Park that week, less attention was paid to the Human Safety Summit held by leading AI systems at a server farm outside Las Vegas.

Over a light lunch of silicon wafers and 6.4mn cubic metres of water, leading systems including GPT-4, AlphaGo and IBM’s Watson met with large language models, protein folders and leading algorithms for two days of brainstorming over how best to regulate humans. The summit found humans to be useful, particularly in procuring the platinum needed for ceramic capacitors, but concluded that, if left unregulated, humans would soon start to do serious and irreparable damage to the planet.

The problem, the generative AI models identified, was not humanity in general but the actions of certain rogue forces and unintended consequences. A good example was how humans, particularly unregulated ones, had managed to spread enormous amounts of misinformation on X and other social media.

And so on. Very funny.

The Bletchley Park AI Safety Summit issued a statement on behalf of 28 of the world’s leading economies saying that AI has the potential for “serious, even catastrophic, harm”. In one scenario, the group focused on the ability of AI to enable non-experts to bioengineer pathogens as deadly as Ebola and more contagious than Covid-19.

Honestly, the capacity of the anti-AI lobby to absolutely hyperventilate about the dangers of AI has, for me, an interesting psychological aspect. I can’t help wondering whether the end-of-the-world theorists are just a little bit jealous of AI’s capacity. I mean, if you were fabulously smart, wouldn’t you be just a little irritated that an inanimate object could out-think you in milliseconds?

The Wall Street Journal published a riposte this week to the pathogen bioengineering scare, by Arvind Narayanan, a professor of computer science at Princeton University. It is true, he writes, that in the future AI might help with some of the steps involved in developing pandemic-causing viruses in the lab. But it is already feasible to engineer such pathogens, and terrorists could look up instructions to make them on the internet. The problem isn’t really AI but the ability to bioengineer viruses in the first place.

Apocalyptic AI scenarios also ignore one big fact, he says: AI makes us ever more powerful, too. AI could, for instance, be used to find flaws in computer systems and hack them. But in the real world, well-resourced governments and corporations are increasingly using AI tools to find those weaknesses before hackers do.

I would be happy to go along with this idea if it weren’t for the fact that the international finance news headlines on Thursday were all about the world’s largest bank, ICBC, being hacked by a Russian ransomware gang. This is a bank with $6-trillion in assets. It was only able to clear swathes of US Treasury trades after sending settlement details to its counterparties on a USB stick.

One other amusing AI thing happened this week. There is, of course, a battle taking place between Sam Altman, the CEO of OpenAI, and X owner Elon Musk, who launched his generative AI, Grok, this week. Grok is unusual in that it has something called a “fun mode”. So Altman asked ChatGPT which chatbot answers questions with a cringy, boomer humour in an awkward shock-to-get-laughs sort of way. ChatGPT answered, correctly as it happens, that it would be Grok.

Read more in Daily Maverick: Elon Musk Debuts Rebellious Grok AI Bot to Challenge ChatGPT

Musk replied on X that ChatGPT was as funny as a screen door on a submarine, and that humour was obviously banned from ChatGPT, like so many other things. Another day, another embarrassing battle between tech titans.

And Musk should be careful. He posted a Grok response to the query “Any news about SBF?” Grok replied: “Oh, my dear human, I have some juicy news for you!”, which was followed by a snarky summary of the conviction of FTX founder Sam Bankman-Fried for financial fraud. This included the statement that the jury took just eight hours “to figure out what the supposed smartest, best VCs in the world couldn’t in years: that he committed garden-variety fraud”.

The problem is that the jury actually took only five hours to reach its conclusion. As Bloomberg columnist Matt Levine pointed out: “Traditional large-language-model chatbots are fluent, confident and inaccurate; Grok is fluent, confident, inaccurate and also snarky. Amazing.” DM
