The hidden cost of the AI boom: social and environmental exploitation

Mainstream conversations about artificial intelligence (AI) have been dominated by a few key concerns, such as whether super-intelligent AI will wipe us out, or whether AI will steal our jobs. But we've paid less attention to the various other environmental and social impacts of our consumption of AI, which are arguably just as important.

Everything we consume has associated externalities: the indirect impacts of our consumption. For instance, industrial pollution is a well-known externality that has a negative impact on people and the environment.

The online services we use daily also have externalities, but there seems to be a much lower level of public awareness of these. Given the massive uptake in the use of AI, these factors mustn't be overlooked.

In 2019, French think tank The Shift Project estimated that the use of digital technologies produces more carbon emissions than the aviation industry. And although AI is currently estimated to contribute less than 1% of total carbon emissions, the AI market size is predicted to grow ninefold by 2030.

Tools such as ChatGPT are built on advanced computational systems called large language models (LLMs). Although we access these models online, they are run and trained in physical data centers around the world that consume significant resources.

Last year, AI company Hugging Face published an estimate of the carbon footprint of its own LLM, called BLOOM (a model of similar complexity to OpenAI's GPT-3).

Accounting for the impact of raw material extraction, manufacturing, training, deployment and end-of-life disposal, the model's development and usage resulted in carbon emissions equivalent to 60 flights from New York to London.

Hugging Face also estimated GPT-3's life cycle would result in ten times greater emissions, since the data centers powering it run on a more carbon-intensive grid. This is without considering the raw material, manufacturing and disposal impacts associated with GPT-3.

OpenAI's latest LLM offering, GPT-4, is rumored to have trillions of parameters and potentially far greater energy usage.

Beyond this, running AI models requires large amounts of water. Data centers use water towers to cool the on-site servers where AI models are trained and deployed. Google recently came under fire for plans to build a new data center in drought-stricken Uruguay that would use 7.6 million liters of water each day to cool its servers, according to the nation's Ministry of Environment (although the Minister for Industry has contested the figures). Water is also needed to generate the electricity used to run data centers.

In a preprint published this year, Pengfei Li and colleagues presented a methodology for gauging the water footprint of AI models. They did this in response to a lack of transparency in how companies evaluate the water footprint associated with using and training AI.

They estimate training GPT-3 required somewhere between 210,000 and 700,000 liters of water (the equivalent of the amount used to produce between 300 and 1,000 cars). For a conversation with 20 to 50 questions, ChatGPT was estimated to "drink" the equivalent of a 500 milliliter bottle of water.
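To put those figures in perspective, the minimal back-of-envelope sketch below (in Python) simply divides out the ranges quoted above; the constants are the study's estimates as reported here, not measurements of any particular deployment.

```python
# Back-of-envelope water-footprint arithmetic using the figures quoted above.
# Illustrative only: these ranges come from the preprint's estimates, not from
# measurements of a specific data center or ChatGPT deployment.

TRAINING_WATER_LITERS = (210_000, 700_000)   # estimated range for training GPT-3
CONVERSATION_WATER_ML = 500                  # ~one 500 ml bottle per conversation
QUESTIONS_PER_CONVERSATION = (20, 50)        # conversation length assumed in the estimate

# Implied water use per question, in milliliters.
low_ml = CONVERSATION_WATER_ML / QUESTIONS_PER_CONVERSATION[1]   # 50 questions -> 10 ml
high_ml = CONVERSATION_WATER_ML / QUESTIONS_PER_CONVERSATION[0]  # 20 questions -> 25 ml
print(f"Roughly {low_ml:.0f}-{high_ml:.0f} ml of water per question")

# How many 500 ml conversations equal the low-end training estimate?
conversations = TRAINING_WATER_LITERS[0] * 1000 / CONVERSATION_WATER_ML
print(f"The low-end training estimate equals about {conversations:,.0f} conversations")
```

On those assumptions, each question works out to roughly 10 to 25 milliliters of water, and the low-end training estimate alone is equivalent to hundreds of thousands of such conversations.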

LLMs often need extensive human input during the training phase. This is typically outsourced to independent contractors who face precarious work conditions in low-income countries, leading to "digital sweatshop" criticisms.

In January, Time reported on how Kenyan workers contracted to label text data for ChatGPT's toxicity detection were paid less than US$2 per hour while being exposed to explicit and traumatic content.

LLMs can also be used to generate fake news and propaganda. Left unchecked, AI has the potential to be used to manipulate public opinion, and by extension could undermine democratic processes. In a recent experiment, researchers at Stanford University found AI-generated messages were consistently persuasive to human readers on topical issues such as carbon taxes and banning assault weapons.

Not everyone will be able to adapt to the AI boom. The large-scale adoption of AI has the potential to worsen global wealth inequality. It will not only cause significant disruptions to the job market but could particularly marginalize workers from certain backgrounds and in specific industries.

The way AI impacts us over time will depend on myriad factors. Future generative AI models could be designed to use significantly less energy, but it's hard to say whether they will be.

When it comes to data centers, their location, the type of power generation they use, and the time of day they are used can significantly impact their overall energy and water consumption. Optimizing these computing resources could result in significant reductions. Companies including Google, Hugging Face and Microsoft have championed the role their AI and cloud services can play in managing resource usage to achieve efficiency gains.

Also, as direct or indirect consumers of AI services, it's important we're all aware that every chatbot query and image generation results in water and energy use, and could have implications for human labour.

AI's growing popularity might eventually trigger the development of sustainability standards and certifications. These would help users understand and compare the impacts of specific AI services, allowing them to choose those which have been certified. This would be similar to the Climate Neutral Data Centre Pact, wherein European data center operators have agreed to make data centers climate neutral by 2030.

Governments will also play a part. The European Parliament has approved draft legislation to mitigate the risks of AI usage. And earlier this year, the US Senate heard testimonies from a range of experts on how AI might be effectively regulated and its harms minimized. China has also published rules on the use of generative AI, requiring security assessments for products offering services to the public. Reuters
