Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create

From targeted phishing campaigns to new stalking methods: there are plenty of ways that artificial intelligence could be used to cause harm if it fell into the wrong hands. A team of researchers decided to rank the potential criminal applications of AI over the next 15 years, starting with those we should worry about most. At the top of the list of most serious threats? Deepfakes.

By using fake audio and video to impersonate another person, the technology can cause various types of harm, the researchers said. The threats range from discrediting public figures in order to influence public opinion, to extorting funds by impersonating someone's child or relative over a video call.

The ranking was put together after scientists from University College London (UCL) compiled a list of 20 AI-enabled crimes based on academic papers, news and popular culture, and got a few dozen experts to discuss the severity of each threat during a two-day seminar.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

The participants were asked to rank the list in order of concern, based on four criteria: the harm a crime could cause, the potential for criminal profit or gain, how easily it could be carried out, and how difficult it would be to stop.
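
To make that methodology concrete, here is a minimal sketch of how a ranking along those four criteria could be aggregated. The crime names, the 1-to-5 scores and the simple averaging below are illustrative assumptions, not UCL's actual data or method.

```python
# Illustrative sketch only: hypothetical scores for a handful of the
# AI-enabled crimes discussed in the article, rated 1-5 on the four
# criteria the seminar participants used.
crimes = {
    "deepfakes": {"harm": 5, "profit": 4, "achievability": 5, "difficulty_to_stop": 5},
    "driverless vehicles as weapons": {"harm": 5, "profit": 2, "achievability": 4, "difficulty_to_stop": 3},
    "AI-authored fake news": {"harm": 4, "profit": 3, "achievability": 5, "difficulty_to_stop": 4},
    "burglar bots": {"harm": 2, "profit": 2, "achievability": 3, "difficulty_to_stop": 1},
}

def concern_score(scores: dict) -> float:
    """Collapse the four criteria into a single level of concern (here, a plain average)."""
    return sum(scores.values()) / len(scores)

# Order the crimes from most to least concerning and print the ranking.
ranking = sorted(crimes, key=lambda name: concern_score(crimes[name]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name} (score: {concern_score(crimes[name]):.2f})")
```

In practice an expert panel would debate and weight such criteria rather than average them mechanically, but the sketch shows how four separate dimensions can collapse into the single ordering the researchers produced.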

Although deepfakes might in principle sound less worrying than, say, killer robots, the technology is capable of causing a lot of harm very easily, and is hard to detect and stop. Relative to other AI-enabled tools, therefore, the experts concluded that deepfakes are the most serious threat out there.

There are already examples of fake content undermining democracy in some countries: in the US, for example, a doctored video of House Speaker Nancy Pelosi in which she appeared inebriated picked up more than 2.5 million views on Facebook last year.

UK organization Future Advocacy similarly used AI to create a fake video during the 2019 general election, which showed Boris Johnson and Jeremy Corbyn endorsing each other for prime minister. Although the video was not malicious, it stressed the potential of deepfakes to impact national politics.

The UCL researchers said that as deepfakes get more sophisticated and credible, they will only get harder to defeat. While some algorithms are already successfully identifying deepfakes online, there are many uncontrolled routes for modified material to spread. Eventually, warned the researchers, this will lead to widespread distrust of audio and visual content.

Five other applications of AI also made it into the "highly worrying" category. With autonomous cars just around the corner, driverless vehicles were identified as a realistic delivery mechanism for explosives, or even as weapons of terror in their own right. Equally achievable is the use of AI to author fake news: the technology already exists, stressed the report, and the societal impact of propaganda shouldn't be underestimated.

Also keeping AI experts up at night are applications that will be so pervasive that defeating them will be near impossible. This is the case for AI-powered phishing attacks, for example, which will be perpetrated via carefully crafted messages that will be near impossible to distinguish from genuine ones. Another example is large-scale blackmail, enabled by AI's potential to harvest large personal datasets and information from social media.

Finally, participants pointed to the multiplication of AI systems used for key applications like public safety or financial transactions and to the many opportunities for attack they represent. Disrupting such AI-controlled systems, for criminal or terror motives, could result in widespread power failures, breakdown of food logistics, and overall country-wide chaos.

UCL's researchers labelled some of the other crimes that could be perpetrated with the help of AI as only "moderately concerning". Among them are the sale of fraudulent "snake-oil" AI for popular services like lie detection or security screening, and increasingly sophisticated learning-based cyberattacks, in which AI could easily probe the weaknesses of many systems.

Several of the crimes cited could arguably be seen as a reason for high concern. For example, the misuse of military robots, or the deliberate manipulation of databases to introduce bias, were both cited as only moderately worrying.

The researchers argued, however, that such applications currently seem too difficult to deploy at scale, or could be easily managed, and therefore do not represent as imminent a danger.

SEE: AI, machine learning to dominate CXO agenda over next 5 years

At the bottom of the threat hierarchy, the researchers listed some "low-concern" applications: the petty crime of AI, if you will. Alongside fake reviews and fake art, the report also mentions burglar bots, small devices that could sneak into homes through letterboxes or cat flaps to relay information to a third party.

Burglar bots might sound creepy, but they could be easily defeated; in fact, they could pretty much be stopped by a letterbox cage, and they couldn't scale. As such, the researchers don't expect them to cause huge trouble anytime soon. The real danger, according to the report, lies in criminal applications of AI that could be easily shared and repeated once they are developed.

UCL's Matthew Caldwell, first author of the report, said: "Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime."

The marketisation of AI-enabled crime, therefore, might be just around the corner. Caldwell and his team anticipate the advent of "Crime as a Service" (CaaS), which would work hand-in-hand with Denial of Service (DoS) attacks.

And some of these crimes will have deeper ramifications than others. The complete ranking of the 20 AI-enabled crimes to look out for, as compiled by UCL's researchers, is laid out in the full report.
