US and Great Britain Forge AI Safety Pact

The U.S. and U.K. have pledged to work together on safe AI development.

The agreement, inked on Monday (April 1) by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, will see the AI Safety Institutes of both countries collaborate on tests for the most advanced artificial intelligence (AI) models.

The partnership will take effect immediately and is intended to allow both organizations to work seamlessly with one another, the Department of Commerce said in a news release.

AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety that can keep pace with the technology's emerging risks.

In addition, the two countries agreed to forge similar partnerships with other countries to foster AI safety around the world. The institutes also plan to conduct at least one joint test on a publicly accessible model and to tap into a collective pool of expertise by exploring personnel exchanges between both organizations.

The agreement comes days after the White House unveiled a policy requiring federal agencies to identify and mitigate the potential risks of AI and to designate a chief AI officer.

Agencies must also create detailed and publicly accessible inventories of their AI systems. These inventories will highlight use cases that could potentially impact safety or civil rights, such as AI-powered healthcare or law enforcement decision-making.

Speaking to PYMNTS following this announcement, Jennifer Gill, vice president of product marketing at Skyhawk Security, stressed the need for the policy to require uniform standards across all agencies.

"If each chief AI officer manages and monitors the use of AI at their discretion for each agency, there will be inconsistencies, which leads to gaps, which leads to vulnerabilities," said Gill, whose company specializes in AI integrations for cloud security.

"These vulnerabilities in AI can be exploited for a number of nefarious uses. Any inconsistency in the management and monitoring of AI use puts the federal government as a whole at risk."

This year also saw the National Institute of Standards and Technology (NIST) launch the Artificial Intelligence Safety Institute Consortium (AISIC), which is designed to promote collaboration between industry and government to foster safe AI use.

"To unlock AI's full potential, we need to ensure there is trust in the technology," Mastercard CEO Michael Miebach said at the time of the launch. "That starts with a common set of meaningful standards that protects users and sparks inclusive innovation."

Mastercard is among the more than 200 members of the group, composed of tech giants such as Amazon, Meta, Google and Microsoft, schools like Princeton and Georgia Tech, and a variety of research groups.
