How AI like ChatGPT could be used to spark a pandemic – Vox.com

New research highlights how language-generating AI models could make it easier to create dangerous germs.

Here's an important and arguably unappreciated ingredient in the glue that holds society together: Google makes it moderately difficult to learn how to commit an act of terrorism. The first several pages of results for a Google search on how to build a bomb, or how to commit a murder, or how to unleash a biological or chemical weapon, won't actually tell you much about how to do it.

It's not impossible to learn these things off the internet. People have successfully built working bombs from publicly available information. Scientists have warned others against publishing the blueprints for deadly viruses because of similar fears. But while the information is surely out there on the internet, it's not straightforward to learn how to kill lots of people, thanks to a concerted effort by Google and other search engines.

How many lives does that save? That's a hard question to answer. It's not as if we could responsibly run a controlled experiment where sometimes instructions about how to commit great atrocities are easy to look up and sometimes they aren't.

But it turns out we might be irresponsibly running an uncontrolled experiment in just that, thanks to rapid advances in large language models (LLMs).

When first released, AI systems like ChatGPT were generally willing to give detailed, correct instructions about how to carry out a biological weapons attack or build a bomb. Over time, OpenAI has corrected this tendency, for the most part. But a class exercise at MIT, written up in a preprint paper earlier this month and covered last week in Science, found that it was easy for groups of undergraduates without a relevant background in biology to get detailed suggestions for biological weaponry out of AI systems.

"In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization," says the paper, whose lead authors include MIT biorisk expert Kevin Esvelt.

To be clear, building bioweapons requires lots of detailed work and academic skill, and ChatGPT's instructions are, so far, probably far too incomplete to actually enable non-virologists to do it. But it seems worth asking: Is security through obscurity a sustainable approach to preventing mass atrocities in a future where information may be easier to access?

In almost every respect, more access to information, detailed supportive coaching, personally tailored advice, and the other benefits we expect from language models are great news. But when a chipper personal coach is advising users on committing acts of terror, it's not such great news.

But it seems to me that you can attack the problem from two angles.

"We need better controls at all the chokepoints," Jaime Yassif at the Nuclear Threat Initiative told Science. It should be harder to induce AI systems to give detailed instructions on building bioweapons. But many of the security flaws the AI systems inadvertently revealed are also fixable. The chatbots noted, for example, that users might contact DNA synthesis companies that don't screen orders, and so would be more likely to authorize a request to synthesize a dangerous virus.

We could require all DNA synthesis companies to do screening in all cases. We could also remove papers about dangerous viruses from the training data for powerful AI systems, a solution favored by Esvelt. And we could be more careful in the future about publishing papers that give detailed recipes for building deadly viruses.

The good news is that positive actors in the biotech world are beginning to take this threat seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can detect engineered DNA at scale, providing investigators with the means to fingerprint an artificially generated germ. That alliance demonstrates the ways that cutting-edge technology can protect the world against the malign effects of ... cutting-edge technology.

AI and biotech both have the potential to be tremendous forces for good in the world. And managing risks from one can also help with risks from the other: making it harder to synthesize deadly plagues, for example, protects against some forms of AI catastrophe just as it protects against human-mediated catastrophe. The important thing is that, rather than letting detailed instructions for bioterror get online as a natural experiment, we stay proactive and ensure that printing biological weapons is hard enough that no one can trivially do it, whether ChatGPT-aided or not.

A version of this story was initially published in the Future Perfect newsletter.
