Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted – ScienceAlert

How might The Terminator have played out if Skynet had decided it probably wasn't responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they're untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors against one another, spotting patterns in masses of data that humans don't have the capacity to analyse.

While Skynet might still be some way off, AI is already making decisions in fields that affect human lives, like autonomous driving and medical diagnosis, which means it's vital that these systems are as accurate as possible. To help towards this goal, the newly created neural network system can generate its confidence level as well as its predictions.

"We need the ability to not only have high-performance models, but also to understand when we cannot trust those models," says computer scientist Alexander Aminifrom the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

This self-awareness of trustworthiness has been given the name Deep Evidential Regression, and it bases its scoring on the quality of the available data it has to work with: the more accurate and comprehensive the training data, the more likely it is that future predictions are going to work out.
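To give a flavour of how that scoring works, here is a simplified Python sketch of the general evidential regression idea rather than the team's own code: the network outputs its prediction together with a few "evidence" parameters, and the uncertainty shrinks as the evidence grows. The names and numbers are purely illustrative.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Convert the four 'evidential' outputs of a network into uncertainty
    estimates. gamma is the prediction itself; nu, alpha and beta encode how
    much evidence the training data provides for it. The formulas follow the
    standard evidential regression parameterisation; the values used below
    are purely illustrative."""
    aleatoric = beta / (alpha - 1.0)          # noise inherent in the data itself
    epistemic = beta / (nu * (alpha - 1.0))   # uncertainty from lack of evidence
    return gamma, aleatoric, epistemic

# The same prediction is trusted more when it is backed by more evidence (larger nu).
print(evidential_uncertainty(gamma=2.0, nu=0.5, alpha=2.0, beta=1.0))   # epistemic = 2.0
print(evidential_uncertainty(gamma=2.0, nu=50.0, alpha=2.0, beta=1.0))  # epistemic = 0.02
```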

The research team compares it to a self-driving car having different levels of certainty about whether to proceed through a junction: if the neural network is less confident in its predictions, the car should wait, just in case. The confidence rating even comes with tips for getting it higher (by tweaking the network or the input data, for instance).

While similar safeguards have been built into neural networks before, what sets this one apart is the speed at which it works, without excessive computing demands: the estimate can be completed in one run through the network, rather than several, with a confidence level output at the same time as the decision.
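In code, that single-pass idea might look something like the following minimal PyTorch sketch, written under our own assumptions rather than taken from the researchers' implementation: the final layer emits the prediction and the evidence parameters together, so the uncertainty comes along for free with the forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressor(nn.Module):
    """Toy regression network whose final layer emits four values:
    the prediction (gamma) plus three evidence parameters (nu, alpha, beta).
    Prediction and confidence both come out of a single forward pass."""

    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 4)  # raw outputs for gamma, nu, alpha, beta

    def forward(self, x):
        gamma, raw_nu, raw_alpha, raw_beta = self.head(self.body(x)).unbind(dim=-1)
        nu = F.softplus(raw_nu)                  # evidence must be positive
        alpha = F.softplus(raw_alpha) + 1.0      # keep alpha > 1
        beta = F.softplus(raw_beta)
        epistemic = beta / (nu * (alpha - 1.0))  # uncertainty, no extra passes needed
        return gamma, epistemic

# One run through the network yields both a prediction and its confidence.
model = EvidentialRegressor(in_features=8)
prediction, uncertainty = model(torch.randn(1, 8))
```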

"This idea is important and applicable broadly," says computer scientist Daniela Rus. "It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model."

The researchers tested their new system by getting it to judge depths in different parts of an image, much like a self-driving car might judge distance. The network compared well to existing setups, while also estimating its own uncertainty: the times it was least certain were indeed the times it got the depths wrong.
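One simple way to check for that behaviour is to see whether the per-prediction uncertainty rises and falls with the actual error. The snippet below is a hypothetical evaluation sketch with made-up depth values, not the paper's actual benchmark.

```python
import numpy as np

def uncertainty_tracks_error(predictions, uncertainties, targets):
    """Crude calibration check: do the most uncertain predictions coincide with
    the largest errors? Returns the correlation between per-sample uncertainty
    and absolute error (closer to 1.0 is better)."""
    errors = np.abs(np.asarray(predictions) - np.asarray(targets))
    return np.corrcoef(np.asarray(uncertainties), errors)[0, 1]

# Hypothetical depth estimates in metres: here the uncertainty grows with the error.
preds  = [1.9, 3.2, 5.1, 7.8]
uncert = [0.05, 0.10, 0.40, 1.20]
truth  = [2.0, 3.0, 5.6, 9.0]
print(uncertainty_tracks_error(preds, uncert, truth))
```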

As an added bonus, the network was able to flag up times when it encountered images outside of its usual remit (so, very different to the data it had been trained on), which in a medical situation could mean getting a doctor to take a second look.
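In practice, that kind of triage could be as simple as thresholding the uncertainty score, along the lines of the sketch below. The threshold and function names are made up for illustration.

```python
def flag_for_review(prediction, epistemic_uncertainty, threshold=0.5):
    """Route low-confidence outputs to a human instead of acting on them.
    The 0.5 threshold is made up; in practice it would be tuned on the
    uncertainty values the model produces for familiar, in-distribution data,
    so that genuinely unfamiliar inputs stand out."""
    if epistemic_uncertainty > threshold:
        return f"defer (uncertainty {epistemic_uncertainty:.2f}): ask a human to take a second look"
    return f"accept: {prediction}"

print(flag_for_review("no anomaly detected", epistemic_uncertainty=0.08))
print(flag_for_review("no anomaly detected", epistemic_uncertainty=1.75))
```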

Even if a neural network is right 99 percent of the time, that missing 1 percent can have serious consequences, depending on the scenario. The researchers say they're confident that their new, streamlined trust test can help improve safety in real time, although the work has not yet been peer-reviewed.

"We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences," says Amini.

"Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision."

The research is being presented at the NeurIPS conference in December, and an online paper is available.

