Will Artificial Intelligence replace human authors in the near future? – The New Indian Express

About one year ago, British newspaper The Guardian ran an article titled 'A robot wrote this entire article. Are you scared yet, human?', written by an Artificial Intelligence (AI)-enabled robot called GPT-3 (Generative Pre-trained Transformer 3). It is an autoregressive language model that uses deep learning to produce human-like text. GPT-3 was fed a short introduction and instructed to write an op-ed of around 500 words in simple language, focusing on why humans have nothing to fear from AI. In response, it produced eight different essays; The Guardian picked the best parts of each and ran the edited piece. GPT-3 even quoted Mahatma Gandhi in its article.

A rapid revolution is underway in the field of AI and Natural Language Processing (NLP). While the world's first-ever AI-written novel was published in Russia in 2008, the first full-length Korean novel written by an AI, named Birampung, hit the shelves in August. Birampung refers to a fierce storm that strikes at the beginning and end of the universe's creation. The 560-page novel was directed by the novelist and mathematician Kim Tae-yon. Kim was reluctant to share the details of the technology involved, but 1,000 books were loaded into Birampung's operating system and it was equipped with the most advanced deep autonomous learning algorithm. Like a true film director, Kim picked the storyline, background and characters, but the actual writing and composition were done by Birampung. The novel, whose title has been translated into English as The World from Now On, took seven years to complete and consists of five stories in which the protagonists (a disabled amateur mathematician, a math professor and entrepreneur, a psychiatrist, an astrophysicist, and a Buddhist monk) are drawn to each other in their individual quests to understand the meaning of human existence.

Is there an existential threat to writers now? Consider GPT-3, the third-generation language prediction model in the series created by OpenAI, an artificial intelligence research company co-founded by Tesla billionaire Elon Musk, among others. What exactly is going on inside GPT-3? An MIT Technology Review article stated: 'What it seems to be good at is synthesising text it has found elsewhere on the internet, making it a kind of vast, eclectic scrapbook created from millions and millions of snippets of text that it then glues together in weird and wonderful ways, on demand.'

GPT-3 can also produce pastiches of particular writers. For instance, when given the title, the author's name, and the initial word 'It', the AI produced a short story called 'The importance of being on Twitter', written in the style of Jerome K Jerome. It even wrote a reasonably informative article about GPT-3.

'Playing with GPT-3 feels like seeing the future,' some experts say. AI still has plenty of shortcomings, though. Its language is not always polished, and many people spotted a lack of depth, with the text reading more like a cut-and-paste job. Some experts have felt that GPT-3's program does nothing more than match words and phrases based on statistical correlations among those in its database. In a March 2021 article published in the journal Nature, Matthew Hutson discusses the rise and risks of language-generating AI. Hutson observes that although such a remarkable AI can write like humans, it still lacks a common-sense understanding of how the world works, physically and socially. For example, when asked, 'How many rainbows does it take to jump from Hawaii to seventeen?', GPT-3 responded: 'It takes two rainbows to jump from Hawaii to seventeen.'

In The Guardian piece, GPT-3 wrote: 'I am only a set of code, governed by lines upon lines of code that encompass my mission statement.' GPT-3 had been trained on around 200 billion words, at an estimated cost of tens of millions of dollars. The AI thus still needs a human editor to tether its writings to reality. In fact, a few days after the op-ed written by GPT-3 was published, a follow-up letter titled 'A human wrote this article. You shouldn't be scared of GPT-3' appeared in The Guardian. The author, Albert Fox Cahn, argued that while GPT-3 is quite impressive, it is useless without human input and edits. 'GPT-3 is just the latest example of computer-assisted authorship, the process by which human authors use technology to enhance the writing process,' Cahn wrote. American programmer-poet Allison Parrish also noted: 'Attributing (The Guardian article) to AI is sort of like attributing the pyramids to the Pharaoh. Pharaoh didn't do that. The workers did.'

GPT-3 is an artificial neural network with 175 billion parameters; in The Guardian piece, it claimed to use only 0.12% of its cognitive capacity. It's certainly a big leap forward from GPT-2, which had 1.5 billion parameters. When GPT-4 or GPT-5 rolls around in the future, should human writers really feel dread? Will AI measure up to J K Rowling or Kazuo Ishiguro, or report on Afghanistan? In his Nature article, Hutson wrote: 'It's possible that a bigger model would do better, with more parameters, more training data, more time to learn. But this will get increasingly expensive and can't be continued indefinitely.' The opaque complexity of language models creates another limitation. Still, would some GPT-n or equivalent AI be able to produce one of Tagore's songs or Shakespeare's plays in the near future? A new technological anxiety would, however, invariably evolve around it.

P.S.: This article has been written entirely by a human being, not an AI.

Atanu Biswas, Professor of Statistics, Indian Statistical Institute, Kolkata (appubabale@gmail.com)
