What we lose when we work with a giant AI like ChatGPT – The Hindu

Recently, ChatGPT and its ilk of giant artificial intelligences (Bard, Chinchilla, PaLM, LaMDA, et al.), or gAIs, have been making headlines.

ChatGPT is a large language model (LLM). This is a type of (transformer-based) neural network that is very good at predicting the next word in a sequence of words. ChatGPT is built on GPT-4, a model trained on a large amount of text from the internet, text that its maker OpenAI could scrape and could justify as being safe and clean to train on. GPT-4 reportedly has a trillion parameters, now being applied in the service of, per the OpenAI website, ensuring that artificial general intelligence benefits all of humanity.
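
To make the idea of next-word prediction concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the openly available GPT-2 model as a stand-in (GPT-4's weights are not public); the prompt is illustrative.

```python
# A minimal sketch of "predicting the next word", using the openly
# available GPT-2 model (GPT-4 itself is not publicly downloadable).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of India is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The probability distribution over the next token sits at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  p={prob:.3f}")
```

Everything an LLM does, from answering questions to writing essays, is built on repeating this one step: pick a likely next word, append it, and predict again.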

Yet gAIs leave no room for democratic input: they are designed from the top down, on the premise that the model will acquire the smaller details on its own. Many use cases are intended for these systems, including legal services, teaching students, generating policy suggestions and even providing scientific insights. gAIs are thus intended to be a tool that automates what has so far been assumed impossible to automate: knowledge work.

In his 1998 book Seeing Like a State, Yale University professor James C. Scott delves into the dynamics of nation-state power, both democratic and non-democratic, and its consequences for society. States seek to improve the lives of their citizens, but when they design policies from the top down, they often reduce the richness and complexity of human experience to that which is quantifiable.

The current driving philosophy of states is, according to Prof. Scott, high modernism: a faith in order and measurable progress. He argues that this ideology, which falsely claims to have scientific foundations, often ignores local knowledge and lived experience, leading to disastrous consequences. He cites the example of monocrop plantations, in contrast to multi-crop plantations, to show how top-down planning can fail to account for regional diversity in agriculture.

The consequence of that failure is the destruction of soil and livelihoods in the long term. This is the same risk now facing knowledge work in the face of gAIs.

Why is high modernism a problem when designing AI? Wouldn't it be great to have a one-stop shop, an Amazon for our intellectual needs? As it happens, Amazon offers a clear example of the problems that result from a lack of diverse options. Such a business model yields increased standardisation rather than sustainability or craft: everyone ends up with the same cheap, cookie-cutter products, while local small-town shops die a slow death by a thousand clicks.

Like the death of local stores, the rise of gAIs could lead to the loss of languages, which would hurt the diversity of our very thoughts. The risk of such language loss stems from the bias induced by training models only on the languages that already populate the internet, which is mostly English (~60%). There are other ways in which a model is likely to be biased, including on religion (more websites preach Christianity than other religions, for example), sex and race.

At the same time, LLMs are unreasonably effective at providing intelligible responses. Science-fiction author Ted Chiang suggests that this is because ChatGPT is a "blurry JPEG" of the internet, but a more apt analogy might be that of an atlas.

An atlas is a great way of seeing the whole world in snapshots. However, an atlas lacks multi-dimensionality. For example, I asked ChatGPT why it is a bad idea to plant eucalyptus trees in the West Medinipur district. It gave me several reasons why monoculture plantations are bad but failed to supply the real reason people in the area opposed them: monoculture plantations reduced the food they could gather.

That kind of local knowledge only comes from experience. We can call that knowledge of the territory. This knowledge is abstracted away by gAIs in favour of the atlas view of all that is present on the internet. The territory can only be captured by the people doing the tasks that gAIs are trying to replace.

A part of the failure to capture the territory shows in gAIs' lack of understanding. If you are careful about what you ask them (a feat called prompt engineering, an example of a technology warping the ecology of our behaviour), they can fashion impressive answers. But ask them the same question in a slightly different way and you can get complete rubbish. This trend has prompted computer scientists to call these systems "stochastic parrots", that is, systems that can mimic language but are random in their behaviour.
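
One can probe this brittleness directly. The sketch below, again assuming the transformers library and the small GPT-2 model as a stand-in for a chat-tuned system, feeds a model several paraphrases of the same question and prints what comes back:

```python
# A toy probe of prompt sensitivity: semantically equivalent prompts
# can yield continuations that differ in substance, not just wording.
# Requires: pip install torch transformers
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

paraphrases = [
    "Planting eucalyptus in West Medinipur is a bad idea because",
    "The trouble with eucalyptus plantations in West Medinipur is",
    "Eucalyptus should not be planted in West Medinipur since",
]
for p in paraphrases:
    out = generator(p, max_new_tokens=30, do_sample=True)[0]["generated_text"]
    print(out)
    print("---")
```

A system that understood the question, rather than the surface form of the words, would give substantively the same answer each time.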

Positive research directions exist as well. For example, BLOOM is an open-source LLM developed by scientists with public money and with extensive filtering of the training data. The model is also multilingual, covering 10 Indian languages, and the project has an active ethics team that regularly updates the licence for use.
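
Because BLOOM's weights are openly published on the Hugging Face hub, anyone can download and inspect the model. Here is a minimal sketch, using the small bigscience/bloom-560m checkpoint so it runs on ordinary hardware (the full model is bigscience/bloom), with an illustrative Hindi prompt:

```python
# A minimal sketch of running the open-source BLOOM model locally.
# Requires: pip install torch transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# BLOOM's training data includes several Indian languages,
# so the prompt need not be in English.
prompt = "भारत की राजधानी"  # Hindi: "The capital of India"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

That anyone can run, audit, and retrain such a model is precisely the kind of democratic input that closed gAIs preclude.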

There are multiple ways to thwart the risks posed by gAIs. One is to artificially slow the rate of progress in AI commercialisation to allow time for democratic inputs. (Tens of thousands of researchers have already signed a petition to this effect.)

Another is to ensure that diverse models are being developed. Diversity here implies multiple solutions to the same question, like independent cartographers preparing different atlases with different incentives: some will focus on the flora, others on the fauna. Research on diversity suggests that the more time passes before a common solution is reached, the better the outcome. And a better outcome is critical given the stakes involved in artificial general intelligence, a field in which a third of surveyed researchers believe AI could cause a catastrophe on the scale of nuclear war.

Just to be clear, I wrote this article, not ChatGPT. But I wanted to check what it would say:

Q: Write a response to the preceding text as ChatGPT.

A: As ChatGPT, I'm a tool meant to assist and augment human capabilities, not replace them; my goal is to understand and respond to your prompts, not to replace the richness and diversity of human knowledge and experience.

Yet as the writer George Zarkadakis put it, "Every augmentation is also an amputation." ChatGPT and co. may assist and augment, but at the same time they reduce the diversity of thoughts, solutions, and knowledge, and they currently do so without the input of the people meant to use them.
