One think tank vs. ‘god-like’ AI – POLITICO

With help from Derek Robertson

The OpenAI website. | Marco Bertorello/AFP/Getty Images

A few short years ago, Daniel Colson was taking a startup investment from OpenAI founder Sam Altman and rubbing shoulders with other AI pioneers in the Bay Area tech scene.

Now, the tech entrepreneur is launching a think tank aimed at recruiting Washington's policymakers to stop his one-time funder. Colson views it this way: The top scientists at the biggest AI firms believe that they can make artificial intelligence a billion times more powerful than today's most advanced models, creating something like a god within five years.

His proposal to stop them: Prevent AI firms from acquiring the vast supplies of hardware they would need to build super-advanced AI systems by making it illegal to build computing clusters above a certain processing power. Because of the scale of computing systems needed to produce a super-intelligent AI, Colson argues such endeavors would be easy for governments to monitor and regulate.

"I see that science experiment as being too dangerous to run," he said.

As Washington's policy scene reorients toward AI, Colson, 30, is the latest newcomer who sees cosmic stakes in the looming fights over the technology. But his Artificial Intelligence Policy Institute is looking to start with a humbler contribution to the emerging policy landscape: polling.

Last week, AIPI released its first poll, based on a thousand respondents, finding that 72 percent of American voters support measures to slow the advance of AI.

Lamenting a lack of quality public polling on AI policy, Colson said he believes that such polls have the potential to shift the narrative in favor of decisive government action ahead of looming legislative fights.

To do that, Colson's enlisted a roster of tech entrepreneurs and policy wonks.

"AI safety is just massively under-surveyed," said Sam Hammond, an AI safety researcher listed among AIPI's advisers.

Colson is also getting advice from one adviser who goes unmentioned on AIPI's website. Progressive pollster Sean McElwee, an expert in using polling to shape public opinion who is best known for his relationships with the Biden White House and Sam Bankman-Fried, is advising Colson behind the scenes.

A spokesman for Colson, Sam Raskin, described McElwee as "one of many advisers." McElwee, who was ousted last year from the left-wing polling firm Data for Progress, reportedly in part over his Bankman-Fried ties, did not respond to a request for comment.

As AI safety proponents confront the technology's rapid advance, Colson has been participating in calls convened in recent months by Rethink Priorities, a nonprofit launched in 2018, to formulate a policy response among like-minded researchers and activists. Rethink Priorities is associated with Effective Altruism, a utilitarian philosophy that is widespread in the tech world.

Though many Effective Altruists also worry about AI's potential existential risks, Colson distances himself from the movement.

He traces his misgivings to his attendance at an Effective Altruism gathering at the University of Oxford in 2016, where Google DeepMind CEO Demis Hassabis gave a talk assuring attendees the company considered AI safety a top priority.

"All of the [Effective Altruists] in the audience were extremely excited and started clapping," Colson recalled. "I remember thinking, 'Man, I think he just co-opted our movement.'"

(A spokeswoman for DeepMind said Hassabis "has always been vocal about how seriously Google DeepMind takes the safe and responsible deployment of artificial intelligence.")

A year later, Colson co-founded Reserve, a stablecoin-focused crypto startup that landed investments from Altman and Peter Thiel. He found himself running in the same circles as many of the people who were then laying the foundations for the current AI boom.

But Colson said that his experience as a Bay Area tech founder left him with the conviction that AI scientists' vision for advancing the technology is unsafe. OpenAI did not respond to a request for comment.

Colson also concluded that Effective Altruists' vision for containing AI is too focused on technological fixes while ignoring the potential for government regulation to ensure public safety.

That motivated the launch of AIPI, he said. The group's funding has come from a handful of individual donors in the tech and finance worlds, but Colson declined to name them.

In addition to more polling, AIPI is planning to publish an analysis of AI policy proposals this fall. Colson said he views the next 18 months as the best window for passing effective legislation.

Because of the industrial scale of computing needed to achieve the ambitions of AI firms, he argues that computing clusters are a natural bottleneck at which to focus regulation. He estimates the measure could forestall the arrival of computer super-intelligence by about 20 years.

Congress, he suggested, could cap AI models at 10 to the 25th power floating-point operations, or FLOP, a measure of the total computation used to train a model. (By comparison, GPT-2, which was state of the art in 2019, was trained with about 10 to the 21st FLOP, Colson said.) Or better yet, he said, set the cap five orders of magnitude lower, at 10 to the 20th FLOP. "That's what I would choose."
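The gap between those thresholds is easiest to see as raw arithmetic. A minimal sketch, using only the figures quoted in this article (the variable and function names are ours, purely for illustration, and come from no actual proposal text):

```python
import math

# Figures as quoted in the article; see caveats in the lead-in above.
GPT2_TRAINING_FLOP = 1e21  # GPT-2 (2019) training compute, per Colson
UPPER_CAP_FLOP = 1e25      # the looser cap he suggests Congress could set
LOWER_CAP_FLOP = 1e20      # the stricter cap he says he would choose

def orders_of_magnitude(a: float, b: float) -> float:
    """Number of powers of ten separating a from b."""
    return math.log10(a / b)

# The two proposed caps sit five orders of magnitude apart.
gap = orders_of_magnitude(UPPER_CAP_FLOP, LOWER_CAP_FLOP)

# The looser cap leaves roughly 10,000x the compute used for GPT-2;
# the stricter cap sits just below GPT-2's training budget.
headroom = UPPER_CAP_FLOP / GPT2_TRAINING_FLOP
```

In other words, the looser cap would still permit training runs ten thousand times larger than GPT-2's, while the stricter one would rule out even a GPT-2-scale run.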

The University of Tokyo. | AFP via Getty Images

With its population shrinking and workforce transforming, Japan is counting on AI to help its society remain dynamic and innovative.

Michael Rosen, an analyst at the libertarian-leaning American Enterprise Institute, reported in a blog post this morning on his recent trip to the nation, where he interviewed experts in both the private and public sectors about what AI can do for Japan's rapidly aging society. For example: A chief at SoftBank Robotics boasted to Rosen of the company's efforts to combine AI brains with robotic bodies, which could help with the country's rapidly aging janitorial corps.

Yasuo Kuniyoshi, a University of Tokyo AI researcher, argued to Rosen that robots "sharing a similar body is a very important basis for empathy," and described how his research explores the very early proto-moral sense of humanity that AI invokes. The ethical considerations raised by deploying such human-like AI tools demand a policy response, Rosen notes, adding that the people he spoke to in Japan were broadly supportive of a government-driven approach, even as they largely disregarded the doomsday mindset of some Western anti-AI advocates. Derek Robertson

As today's digital architects build their new platforms, they're usually pretty vocal about not repeating the mistakes of yesterday, especially when it comes to the unintended harms that social media platforms like Facebook might have caused.

But maybe they need not worry so much. A wide-ranging, in-depth new study published last week in the peer-reviewed journal Royal Society Open Science finds no evidence suggesting that the global spread of social media is associated with widespread psychological harm.

To build a sample of nearly a million individuals across 72 countries over 11 years, authors Matti Vuorre and Andrew K. Przybylski tracked Facebook usage using data from the company and matched it with the Gallup World Poll's data on well-being. They conclude that "it is not obvious or necessary that their wide adoption has influenced psychological well-being, for better or for worse."

However, they do note that their results might not generalize across different platforms like Snapchat or TikTok, and that to move past description, the goal of this study, to prediction or evidence-based intervention, "independent scientists and online platforms will need to collaborate in new, transparent ways." Derek Robertson

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); and Steve Heuser ([emailprotected]). Follow us @DigitalFuture on Twitter.

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
