How to regulate AI, according to the 1967 Outer Space Treaty – Quartz

World leaders' attempts to forge global AI regulations have been half-hearted and halting.

In 2018, Canada and France spearheaded an effort, backed by the G7, to form a regulatory body for AI, but the US spiked it, arguing that it would crimp American innovation. Instead, the OECD and the G20 adopted a set of AI principles the following year, while the EU and the World Economic Forum each came up with their own.

"They don't really have teeth, and they're also very fragmented," said Marietje Schaake, the international policy director for Stanford's Cyber Policy Center and a former member of the European Parliament representing the Netherlands. To head off AI-enabled human rights abuses (and the chaos of conflicting regulations), global leaders could turn to another set of rules governing a powerful new technology: the 1967 Outer Space Treaty.

In a blog post, DeepMind researcher and Cambridge fellow Verity Harding argued that the Cold War space agreement offers a roadmap for international cooperation achieved at a time at least as unsafe and complicated as today's world, if not considerably more so.

In 1967, as the US and the Soviet Union sprinted to develop their spacefaring capabilities, concern grew that world powers might use space as a staging ground for weapons of mass destruction. The space accords sought to keep nukes out of orbit and to establish that celestial bodies couldn't be colonized or used for military purposes.

The text of the treaty is instructive. Where AI ethics statements are mushy (the OECD blandly declares that AI should "benefit people and the planet"), the space accords are firm. The 1967 treaty states in no uncertain terms, for example, that "the establishment of military bases, installations and fortifications, the testing of any type of weapons and the conduct of military maneuvers on celestial bodies shall be forbidden."

The document is also rather short, limited to the areas where global leaders could find broad agreement. Harding argues this was a shrewd approach to getting a deal done in time to have a real impact: "Not letting the best be the enemy of the good meant that by the time man landed on the moon we had a global political framework as a foundation on which to build." (Harding did not respond to requests for an interview.)

But it also meant global leaders had to scramble to fill in the gaps later. Zia Khan, who heads the Rockefeller Foundation's work on technology and innovation, says these sorts of limitations are inevitable in any treaty. "If we just try to come up with rules, they would probably not be correct, or get out of date, or be mostly right but we'd have no way to tweak them," he said.

Khan argues that, in addition to a first pass at international law, global leaders must also create a rule-making body that can adjust regulations as the world changes.

And there's one key difference between rocket ships in the 1960s and AI today, Khan points out: Algorithms are already ubiquitous, and businesses are increasingly using them to automate operations. "We need people to see this as important," he said. "If we don't get our arms around AI now, we'll end up where we are with the climate because we didn't think hard enough about how we use oil and energy."
