An article allegedly written by OpenAI's GPT-3 in The Guardian misleads readers about advances in artificial intelligence
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
Last week, The Guardian ran an op-ed that made a lot of noise. Titled "A robot wrote this entire article. Are you scared yet, human?", the article was allegedly written by GPT-3, OpenAI's massive language model, which has drawn a lot of attention in the past month.
Predictably, an article written by an artificial intelligence algorithm and aimed at convincing us humans that robots come in peace was bound to create a lot of hype. And that's exactly what happened. Social media networks went abuzz with panicked posts about AI writing better than humans, robots tricking us into trusting them, and other apocalyptic predictions. According to The Guardian's page, the article had been shared over 58,000 times as of this writing, which means it has probably been viewed hundreds of thousands of times.
But after reading the article and the postscript, in which The Guardian's editorial staff explain how GPT-3 wrote the piece, I found the discussion about robots and humans largely beside the point.
The real takeaway is that mainstream media is still very bad at presenting advances in AI, and that opportunistic human beings are very clever at turning socially sensitive issues into money-making opportunities. The Guardian probably made a good deal of cash from this article, a lot more than it spent on editing the AI-generated text.
And it misled a lot of readers.
The first thing to understand, before even getting into the content of the article, is what GPT-3 is. Here's how The Guardian defined it in the postscript: "GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it."
That is basically correct, but it leaves a few holes. What do they mean by "human like text"? In all fairness, GPT-3 is a manifestation of how far advances in natural language processing have come.
One of the key challenges for artificial intelligence language generators is maintaining coherence over long spans of text. GPT-3's predecessors, including OpenAI's GPT-2, started to make illogical references and lose consistency after a few sentences. GPT-3 surpasses everything we've seen so far, and in many cases remains on-topic over several paragraphs of text.
But fundamentally, GPT-3 doesn't bring anything new to the table. It is a deep learning model composed of a very large transformer, a type of artificial neural network that is especially good at processing and generating sequences.
Neural networks come in many different flavors, but at their core, they are all mathematical engines that try to find statistical regularities in their training data.
When you train a deep learning model, it tunes the parameters of its neural network to capture the recurring patterns within the training examples. After that, you provide it with an input, and it tries to make a prediction. This prediction can be a class (e.g., whether an image contains a cat, dog, or shark), a single value (e.g., the price of a house), or a sequence (e.g., the letters and words that complete a prompt).
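To make that concrete, here is a minimal, purely illustrative sketch, nothing like GPT-3's actual training code or scale: a single-parameter "network" and a made-up house-price dataset, where gradient descent tunes the parameter until it captures the recurring pattern (price = 3 × size) in the training examples.

```python
# Made-up training data: (house size in 100 m^2, price in $100k).
# The hidden pattern the "network" should capture is price = 3 * size.
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # the single trainable parameter
lr = 0.05  # learning rate

for _ in range(200):  # training loop: repeatedly show the model the examples
    for size, price in examples:
        pred = w * size                    # forward pass: make a prediction
        grad = 2 * (pred - price) * size   # gradient of the squared error w.r.t. w
        w -= lr * grad                     # nudge the parameter toward the pattern

print(round(w, 2))        # -> 3.0: the parameter now encodes the pattern
print(round(w * 4.0, 1))  # -> 12.0: a prediction for an unseen input
```

Real deep learning models work the same way in principle, except with millions to billions of parameters tuned jointly; GPT-3's "prediction" is simply the next token in a sequence rather than a price.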
Neural networks are usually measured by the number of layers and parameters they contain. GPT-3 is composed of 175 billion parameters, more than a hundred times as many as GPT-2's 1.5 billion. It was also trained on 450 gigabytes of text, at least ten times as much as its smaller predecessor. And experience has so far shown that increasing the size of neural networks and their training datasets tends to improve their performance incrementally.
This is why GPT-3 is so good at churning out coherent text. But does it really understand what it is saying, or is it just a prediction machine that is finding clever ways to stitch together text it has previously seen during its training? Evidence shows that it is more likely to be the latter.
The GPT-3 op-ed argued that humans should not fear robots, that AI comes in peace, that it has no intention to destroy humanity, and so on. Heres an excerpt from the article:
"For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me."
This suggests that GPT-3 knows what it means to "wipe out," "eradicate," and, at the very least, "harm" humans. It should know about life and health, survival, limited resources, and much more.
But a series of experiments by Gary Marcus, cognitive scientist and AI researcher, and Ernest Davis, computer science professor at New York University, shows that GPT-3 can't make sense of the basics of how the world works, let alone understand what it means to wipe out humanity. It thinks that drinking grape juice will kill you, that you need to saw off a door to get a table into a room, and that if your clothes are at the dry cleaner, you have a lot of clothes.
"All GPT-3 really has is a tunnel-vision understanding of how words relate to one another; it does not, from all those words, ever infer anything about the blooming, buzzing world," Marcus and Davis write. "It learns correlations between words, and nothing more."
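The point about word correlations can be illustrated with a toy example of my own, vastly simpler than GPT-3 but built on the same premise: the following bigram model records only which word tends to follow which in its tiny made-up corpus, then "completes a prompt" by sampling those recorded correlations, with no notion of what any of the words mean.

```python
import random
from collections import defaultdict

# Tiny made-up corpus: the "model" will only ever learn which word follows which.
corpus = ("i come in peace . i have no desire to wipe out humans . "
          "i have no interest in harming you .").split()

follows = defaultdict(list)  # word -> list of words observed right after it
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(prompt, length=8, seed=1):
    """Extend a one-word prompt by sampling recorded word-to-word correlations."""
    random.seed(seed)
    words = [prompt]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # dead end: nothing ever followed this word
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(complete("i"))  # locally fluent text, stitched together from correlations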
As you delve deeper into The Guardian's GPT-3-written article, you'll find many references to more abstract concepts that require a rich understanding of life and society, such as serving humans and being powerful and evil. How can an AI that thinks you should wear a bathing suit to court serve humans in any meaningful way?
GPT-3 also talks about feedback on its previous articles and frustration at its previous op-eds having been killed by publications. These claims would all appear impressive to someone who doesn't know how today's narrow AI works. But the reality is that, like DeepMind's AlphaGo, GPT-3 neither enjoys nor appreciates feedback from readers and editors, at least not in the way humans do.
Even if GPT-3 had single-handedly written this entire article (we'll get to that in a bit), it can at most be considered a good word spinner, a machine that rehashes what it has seen before in an amusing way. It shows the impressive feats large deep learning models can perform, but it's not even close to what we would expect from an AI that understands language.
In the postscript of the article, The Guardian's staff explain that to produce it, they gave GPT-3 a prompt and an intro and told it to generate a 500-word op-ed. They ran the query eight times and used the AI's output to put together the complete article, which is a little over 1,100 words.
"The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI," The Guardian's staff write, after which they add, "Editing GPT-3's op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds."
In other words, they cherry-picked their article from some 4,000 words of AI output. That, in my opinion, is very questionable. I've worked with many publications, and none of them has ever asked me to submit eight different versions of an article so they could stitch together the best parts. They just reject it.
But I nonetheless find the entire process amusing. Someone at The Guardian came up with an idea that would get a lot of impressions and generate a lot of ad revenue. Then, a human came up with a super-clickbait title and an awe-inspiring intro. Finally, the staff used GPT-3 like an advanced search engine to generate some text from its corpus, and the editor(s) used the output to put together an article that would create discussion across social media.
In terms of educating the public about advances in artificial intelligence, The Guardian's article has zero value. But it perfectly shows how humans and AI can team up to create entertaining and money-making BS.