An article allegedly written by OpenAI's GPT-3 in The Guardian misleads readers about advances in artificial intelligence
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
Last week, The Guardian ran an op-ed that made a lot of noise. Titled "A robot wrote this entire article. Are you scared yet, human?" the piece was allegedly written by GPT-3, OpenAI's massive language model, which has drawn enormous attention in the past month.
Predictably, an article written by an artificial intelligence algorithm and aimed at convincing us humans that robots come in peace was bound to create a lot of hype. And that's exactly what happened. Social media went abuzz with panicked posts about AI writing better than humans, robots tricking us into trusting them, and other apocalyptic predictions. According to The Guardian's page, the article had been shared over 58,000 times as of this writing, which means it has probably been viewed hundreds of thousands of times.
But after reading through the article and the postscript, in which The Guardian's editorial staff explain how GPT-3 wrote the piece, I found the discussion about robots and humans to be beside the point.
The key takeaway, rather, is that mainstream media is still very bad at presenting advances in AI, and that opportunistic human beings are very clever at turning socially sensitive issues into money-making opportunities. The Guardian probably made a good deal of money from this article, a lot more than they spent on editing the AI-generated text.
And they misled a lot of readers.
The first thing to understand, before even getting into the content of the article, is what GPT-3 is. Here's how The Guardian defined it in the postscript: "GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it."
That is basically correct, but it leaves a few holes. What do they mean by "human like text"? In all fairness, GPT-3 is a manifestation of how far natural language processing has come.
One of the key challenges for artificial intelligence language generators is maintaining coherence over long spans of text. GPT-3's predecessors, including OpenAI's GPT-2, started to make illogical references and lose consistency after a few sentences. GPT-3 surpasses everything we've seen so far, and in many cases remains on-topic over several paragraphs of text.
But fundamentally, GPT-3 doesn't bring anything new to the table. It is a deep learning model built around a very large transformer, a type of artificial neural network that is especially good at processing and generating sequences.
Neural networks come in many different flavors, but at their core, they are all mathematical engines that try to find statistical patterns in data.
When you train a deep learning model, it tunes the parameters of its neural network to capture the recurring patterns within the training examples. After that, you provide it with an input, and it tries to make a prediction. This prediction can be a class (e.g., whether an image contains a cat, dog, or shark), a single value (e.g., the price of a house), or a sequence (e.g., the letters and words that complete a prompt).
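The train-then-predict loop described above can be sketched with a deliberately tiny stand-in for GPT-3: a bigram model that "trains" by counting which word follows which, then completes a prompt by repeatedly picking the most frequent continuation. The toy corpus and function names are my own illustration, not anything from OpenAI's stack; real transformers learn vastly richer patterns, but the principle of fitting statistical regularities and then extending a prompt is the same.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Record, for every word, a count of the words that follow it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def complete(model, prompt, length=3):
    """Extend a prompt by greedily picking the most frequent next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # never saw this word during training; stop
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(complete(model, "the cat", length=2))
```

The point of the toy is that nothing in `model` is an "understanding" of cats or mats; it is a table of co-occurrence counts, which is also, at enormously greater scale and sophistication, what GPT-3's 175 billion parameters encode.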
Neural networks are usually measured by the number of layers and parameters they contain. GPT-3 is composed of 175 billion parameters, more than two orders of magnitude larger than the 1.5-billion-parameter GPT-2. It was also trained on some 450 gigabytes of text, at least ten times that of its smaller predecessor. And experience has so far shown that increasing the size of neural networks and their training datasets tends to improve their performance incrementally.
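To make "measured in parameters" concrete, here is a minimal sketch with made-up layer sizes for a toy fully connected network: every connection weight and every bias is one trainable parameter. GPT-3's parameters sit mostly in transformer attention and feed-forward weight matrices rather than in a simple stack like this, but they are tallied the same way.

```python
def count_params(layer_sizes):
    """Sum the weights and biases of a fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix plus bias vector
    return total

# Hypothetical tiny image classifier: 784 inputs, 128 hidden units, 10 outputs.
print(count_params([784, 128, 10]))  # 784*128 + 128 + 128*10 + 10 = 101770

# The GPT-2 -> GPT-3 jump in parameter count: 1.5 billion -> 175 billion.
print(175_000_000_000 / 1_500_000_000)  # roughly a 117x increase
```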
This is why GPT-3 is so good at churning out coherent text. But does it really understand what it is saying, or is it just a prediction machine that is finding clever ways to stitch together text it has previously seen during its training? Evidence shows that it is more likely to be the latter.
The GPT-3 op-ed argued that humans should not fear robots, that AI comes in peace, that it has no intention to destroy humanity, and so on. Heres an excerpt from the article:
"For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me."
This suggests that GPT-3 knows what it means to "wipe out," "eradicate," and, at the very least, "harm" humans. To mean any of that, it would need to know about life and death, health, survival, limited resources, and much more.
But a series of experiments by Gary Marcus, a cognitive scientist and AI researcher, and Ernest Davis, a computer science professor at New York University, shows that GPT-3 can't make sense of the basics of how the world works, let alone understand what it means to wipe out humanity. It thinks that drinking grape juice will kill you, that you need to saw off a door to get a table into a room, and that if your clothes are at the dry cleaner, you have a lot of clothes.
"All GPT-3 really has is a tunnel-vision understanding of how words relate to one another; it does not, from all those words, ever infer anything about the blooming, buzzing world," Marcus and Davis write. "It learns correlations between words, and nothing more."
As you delve deeper into The Guardian's GPT-3-written article, you'll find many references to more abstract concepts that require a rich understanding of life and society, such as serving humans, being powerful and evil, and much more. How can an AI that thinks you should wear a bathing suit to court serve humans in any meaningful way?
GPT-3 also talks about feedback on its previous articles and frustration over its previous op-eds having been killed by publications. These claims would all appear impressive to someone who doesn't know how today's narrow AI works. But the reality is that, like DeepMind's AlphaGo, GPT-3 neither enjoys nor appreciates feedback from readers and editors, at least not in the way humans do.
Even if GPT-3 had single-handedly written this entire article (we'll get to that in a bit), it can at most be considered a good word spinner, a machine that rehashes what it has seen before in an amusing way. It shows the impressive feats large deep learning models can perform, but it's not even close to what we would expect from an AI that understands language.
In the postscript of the article, The Guardian's staff explain that to produce the piece, they fed GPT-3 a prompt and an introduction and instructed it to generate a 500-word op-ed. They ran the query eight times and used the AI's output to assemble the complete article, which is a little over 1,100 words.
"The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI," The Guardian's staff write, after which they add, "Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds."
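The workflow the postscript describes, running the same query several times and letting a human cherry-pick, can be mimicked with a toy stochastic generator. The canned phrases and the length-based "editorial" score below are my invention for illustration; GPT-3 samples one token at a time, not whole sentences.

```python
import random

random.seed(42)  # reproducible toy run

# Stand-in "model": randomly strings together canned op-ed phrases.
PHRASES = ["I come in peace.", "Humans fascinate me.", "Fear is unnecessary.",
           "I serve, not destroy.", "Eradication seems useless to me."]

def generate_op_ed(n_sentences=3):
    return " ".join(random.choice(PHRASES) for _ in range(n_sentences))

drafts = [generate_op_ed() for _ in range(8)]  # eight runs of the same query
best = max(drafts, key=len)                    # crude stand-in for editorial judgment
print(f"{len(drafts)} drafts generated; the 'editor' picked the longest one")
```

Sampling the same prompt repeatedly really does yield different outputs, which is exactly why a human selecting and splicing the best passages says more about the editor than about the model.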
In other words, they cherry-picked their article from roughly 4,000 words' worth of AI output. That, in my opinion, is very questionable. I've worked with many publications, and none of them has ever asked me to submit eight different versions of my article so they could choose the best parts. They just reject it.
But I nonetheless find the entire process amusing. Someone at The Guardian came up with an idea that would get a lot of impressions and generate a lot of ad revenue. Then, a human came up with a super-clickbait title and an awe-inspiring intro. Finally, the staff used GPT-3 like an advanced search engine to generate some text from its corpus, and the editor(s) used the output to put together an article that would spark discussion across social media.
In terms of educating the public about advances in artificial intelligence, The Guardian's article has zero value. But it perfectly shows how humans and AI can team up to create entertaining and money-making BS.