AI ethics experts warn safety principles could lead to 'ethicswashing'

New artificial intelligence (AI) safeguarding principles adopted by investors could do more harm than good, AI ethics experts warn, as overreliance on best practice risks creating the impression that the technology sits outside existing laws.

This week, four of the most influential AI companies announced the formation of an industry body whose stated goal is to promote responsible AI.

However, the Frontier Model Forum, formed by ChatGPT developer OpenAI and AI startup Anthropic alongside their main investors Microsoft and Google, has caused concern in the AI ethics community about its effectiveness.

AI experts fear such forums could lead to ethicswashing, with one AI and data ethics advisor warning investors that signing up to new AI principles could lead people to believe that the fast-developing technology is not accountable under the law.

These fears come as AI is the hottest theme in equity markets. Among the best-known AI plays is Citywire Elite Companies AAA-rated Nvidia (US:NVDA), whose shares have more than tripled this year.

AI ethics and data advisor Ravit Dotan said the Frontier Model Forum's activities and membership criteria do not mandate any action to actively identify or mitigate risks. By joining the Forum, she said, companies get to be seen as ethical, because one of the criteria for membership is demonstrating a strong commitment to AI ethics themes. She added that the Forum's objectives offer nothing new.

'AI technology should already adhere to existing laws. Signing up to new principles suggests that AI technology somehow sits outside of these laws, which it does not,' she said.

'The laws around discrimination and data privacy apply to these technologies now, so what I would say to investors is: do the due diligence to make sure you are not investing in companies that break these laws.'

Dotan said of the Forum: 'This initiative looks like a flash from the past. There are many initiatives of this kind already, but the surge in AI ethics research and best practices has not been accompanied by action. The time for initiatives that only identify best practices, advance research and facilitate knowledge sharing is over.'

How are the asset managers with holdings in Microsoft and Alphabet reacting to these concerns?

In Europe alone, 206 open-ended funds with the highest sustainable investment objectives hold a combined 11.1 million Microsoft and Alphabet shares, according to Morningstar data. These were worth €3bn as of 30 June.

In the table below are some of the Luxembourg- or Ireland-domiciled funds with more than 1% of their assets in Microsoft and Alphabet.

Citywire Selector contacted 10 asset managers holding Luxembourg- or Ireland-domiciled ESG funds with more than 1% of assets in Microsoft and Alphabet to ask about their engagement practices on ethics and safety.

While Morgan Stanley and Danske declined to comment, those who responded said they have signed up to uphold safeguarding principles.

Fidelity International said its ESG team has been engaging with the World Benchmarking Alliance (WBA) to address concerns on safety and ethics.

'We have felt it is extremely important to raise levels of understanding and discussion about issues of digital ethics broadly, and to promote commitments from companies to best practices and disclosures regarding the responsible development and deployment of artificial intelligence specifically,' Fidelity International said.

Mirroring some of Dotan's concerns, a Candriam spokesperson said it also recognises the limitations of new laws and stressed its backing of ethical practice.

'We welcome the recent EU AI Act, which is one of the strongest pieces of legislation in the world. But it relies greatly on companies to self-assess the level of risk of their products and services. Hence the importance that companies adopt strong ethical practices,' the spokesperson said.

Additionally, Candriam said it has been taking an active role in several initiatives addressing responsible AI, including the WBA's Responsible AI initiative, big tech and human rights engagement, Ranking Digital Rights engagement and Corporate Human Rights Benchmark engagement on human rights due diligence.

Johannes Lenhard, an anthropologist of ethics and venture capital at the University of Cambridge, said more due diligence needs to be done by asset managers and venture capitalists (VCs), who are the first to back the next generation of major AI players. He added that VCs must be investors' first line of attack.

'VCs need to be under more pressure to do their homework right now, as the companies they are funding will be taken over by public investors in five to eight years' time,' he said.

'If someone had thought about the unintended ESG consequences of Facebook when it was first invented, we may not have had to deal with all the trouble that has come out of social media today. AI is a parallel to that.'

For Lenhard, the investor community is making the same mistakes again.

'OpenAI is already going to be very hard to influence at this stage, and yet everyone, including the regulator, is focusing on companies that are now established when they should be looking at what the next generation of AI will look like.'

'Asset managers can ask how these firms are mitigating against doing harm. It is a very simple question that is not about reporting and does not require metrics.'

Talk of AI regulation was amplified last month after hundreds of AI chiefs, tech chiefs and executives signed a statement by the Center for AI Safety, which said that 'mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war'.

What followed was executives from OpenAI, Google, Microsoft and other tech firms publicly calling for AI regulation. Since then, the Federal Trade Commission has opened an investigation into OpenAI to find out whether the maker of ChatGPT has violated consumer protection laws by putting personal data at risk.

For Dotan, the letter was wrongheaded and almost malicious in its intent.

'There is a split within the ethics community around long-termism and near-termism, which explains the background of this letter. These tech bros foster long-termism to deflect from the real harms their firms are doing now,' she said.

'It is easier for them to point to far-off science fiction doomsday scenarios rather than address current issues.'

Additionally, Dotan said AI can threaten human existence without becoming all-powerful or sentient.

'It does not have to happen through artificial general intelligence; it could happen in more mundane ways,' she said.

'The carbon emissions and the water footprint from AI development would be so wild that we would become extinct because of climate change. Secondly, discrimination could become so systematic because of AI that we would have race wars and kill ourselves. Or disinformation would become so common that no one would know what is true or false, so that when the next pandemic comes, we all die.'
