Why firms need to scratch the surface of their AI investments – Money Management

The optimism behind disruptive artificial intelligence (AI) technology has driven markets to record highs, but experts warn there are risks and considerations that can be overlooked.

There has been a lot of talk around its many benefits across numerous sectors. According to a recent report, Australia's Generative AI Opportunity, by Microsoft and the Tech Council of Australia, generative AI could contribute between $45 billion and $115 billion a year to Australia's economy by 2030 by improving existing industries and enabling the creation of new products and services.

However, it also entails a number of environmental, social and corporate governance (ESG) concerns that range from data privacy and cyber security to job loss, misinformation and intellectual property.

"The spectrum of risks arising from AI is wide," agrees Fidelity analyst and portfolio manager Marcel Stötzel.

He said: "On one end lie doomsday scenarios involving superintelligent AIs that their creators can't understand or control. More immediate potential threats include the spread of misinformation from large language models (LLMs), which are liable to 'hallucinate', conjuring false facts or misinterpretations."

The complexity of the technology and difficulties in containing it are reflected in the efforts of regulators, which are mobilising but with little global cohesion. Industry-wide attempts to self-regulate have also gained little traction.

In May, the Centre for AI Safety (CAIS), a San Francisco-based research nonprofit, released a one-sentence Statement on AI Risk, which was signed by over 100 professors of AI. It said that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

Even Sam Altman, the co-founder of OpenAI, has expressed concerns and called for greater regulation of AI development, looking into compliance and safety standards, audits, and potential deployment and security concerns.

But the burden isn't just regulatory, Stötzel said.

He added: "Large holders of capital have proven their ability to move the needle on existential issues by engaging with companies on ESG issues such as climate change or employee welfare, and holding firms to account for transgressions. Given the potential dangers related to artificial intelligence, now is the time for investors to assess their investees' use of this powerful tool."

Speaking on a Fidante podcast, Mary Manning, portfolio manager of the Global Sustainable Fund at Alphinity, discussed the importance of considering AI from an ESG perspective.

For her, a particular concern is the prospect of AI becoming sentient, with the ability to process thoughts and feelings.

"If you think about the possibility that AI will become sentient at some point, and you think about that over the long term, then if we get AI wrong and robots or sentient beings start to take over, that is a very big threat to humanity, arguably even more so than climate change."

The firm has since announced a year-long research program with Australia's national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), that aims to identify best practices and provide a framework to assess, manage and report on responsible AI risks.

Jessica Cairns, Alphinity's head of sustainability and ESG, believes the technology has a lot of potential for good, but says the governance, design and application of AI need to be undertaken in a responsible and ethical way.

Through its research so far, the firm has identified some common examples of good practice, such as governance bodies to help guide the strategic direction of AI use and development; a clear AI policy or framework; and an integrated approach with existing risk management frameworks.

"Although many companies see the increased use of AI as transformational, most recognise the risks around human capital and workforce," Cairns told Money Management.

"For companies that are looking to deploy AI internally, we have heard that managements are focused on how they can augment different roles to reduce the amount of repetitive or mundane tasks, rather than replacing roles altogether.

"Similar to the energy transition, we believe a focus on employee engagement and participation is going to be key for companies to ensure the responsible adoption of AI in the context of employee wellbeing."

Reflecting on developments in this space, Betashares director for responsible investments, Greg Liddell, acknowledged it is too early to predict the lasting impact of AI, although many benefits and risks have certainly been identified so far.

In terms of negatives, there has been much discussion of automation and job losses, and of bots that can perpetuate the biases and negativity present on the internet.

Liddell said: "AI will create solutions across a range of fields and applications. It will potentially generate enormous wealth for those at the forefront of its development and implementation.

"But AI needs guardrails to safeguard its development, and ethical investors need to be aware of how companies are using AI and the risks it poses."
