New Research Claims to Have Found a Solution to Machine Learning Attacks

AI has been making major strides in the computing world in recent years, but AI systems have also become increasingly vulnerable to security threats. Just by examining a computer system's power usage patterns, or signatures, during operation, an attacker may be able to gain access to the sensitive information it houses. Machine learning algorithms are especially prone to such attacks. The same algorithms are employed in smart home devices and cars, where specialized computing chips use them to identify different kinds of images and sounds.

These chips run neural networks locally instead of relying on a cloud computing server in a data center miles away. That physical proximity lets the neural networks perform computations faster, with minimal delay, but it also makes it easier for hackers to reverse-engineer a chip's inner workings using a method known as differential power analysis (DPA). DPA is therefore a looming threat to Internet of Things and edge devices, which leak information through their power signatures or electromagnetic emissions. If the neural model is leaked, including its weights, biases, and hyper-parameters, the exposure can violate data privacy and intellectual property rights.
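
To make the DPA idea concrete, here is a minimal, illustrative sketch (not the researchers' code) that simulates a correlation-style power attack under a common assumption: a chip's dynamic power draw roughly tracks the Hamming weight of the data it processes. The attacker predicts that leakage for every candidate secret and keeps the guess that correlates best with the measured traces.

```python
import numpy as np

# Illustrative only: simulate why power draw leaks data.
# Assumption: dynamic power is roughly proportional to the number of
# switching bits, approximated by the Hamming weight of a value.

rng = np.random.default_rng(0)

def hamming_weight(x):
    """Number of set bits in an 8-bit value."""
    return bin(int(x) & 0xFF).count("1")

SECRET = 0xA7                         # a secret byte the device processes
inputs = rng.integers(0, 256, 2000)   # known, attacker-chosen inputs

# The device computes (input XOR secret); its power draw tracks the
# Hamming weight of that intermediate value, plus measurement noise.
traces = np.array([hamming_weight(x ^ SECRET) for x in inputs], dtype=float)
traces += rng.normal(0, 1.0, traces.shape)

# DPA/CPA: for every candidate secret, predict the leakage and
# correlate the prediction with the measured traces. The correct
# guess correlates best.
correlations = []
for guess in range(256):
    hypothesis = np.array([hamming_weight(x ^ guess) for x in inputs])
    correlations.append(np.corrcoef(hypothesis, traces)[0, 1])

print(f"Recovered secret: {int(np.argmax(correlations)):#04x}")  # 0xa7
```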

Recently, a team of researchers from North Carolina State University presented a preprint paper at the 2020 IEEE International Symposium on Hardware Oriented Security and Trust in San Jose, California. The paper extends the DPA framework to neural-network classifiers. First, it demonstrates DPA attacks during inference that extract secret model parameters, such as the weights and biases of a neural network. Second, it proposes the first countermeasures against these attacks by applying masking. The resulting design uses novel masked components, such as masked adder trees for fully connected layers and masked Rectified Linear Units (ReLUs) for activation functions. The team is led by Aydin Aysu, an assistant professor of electrical and computer engineering at North Carolina State University in Raleigh.
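
As a point of reference for what the paper masks, the following is a minimal, unprotected sketch of a binarized fully connected layer: an adder tree summing ±1 products (an XNOR-popcount in hardware) followed by a sign activation. The names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal, unprotected reference for the datapath the paper masks:
# a binarized fully connected layer (adder tree over +/-1 products)
# followed by a sign activation. Names and shapes are illustrative.

def bnn_fc_layer(x, W):
    """x: inputs in {-1,+1}; W: weights in {-1,+1}, shape (out, in).
    Elementwise products are +/-1; the adder tree is their sum.
    In hardware this is an XNOR followed by a popcount."""
    return W @ x

def sign_activation(a):
    """Binarized activation: maps the accumulator back to {-1,+1}."""
    return np.where(a >= 0, 1, -1)

rng = np.random.default_rng(1)
x = rng.choice([-1, 1], size=8)
W = rng.choice([-1, 1], size=(4, 8))
print(sign_activation(bnn_fc_layer(x, W)))
```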

While DPA attacks have been used successfully against targets such as the cryptographic algorithms that safeguard digital information and the smart chips in ATM and credit cards, the team identifies neural networks as likely next targets, with perhaps even more profitable payoffs for hackers or rival competitors. With a stolen model, attackers can also mount adversarial machine learning attacks that confuse the original neural network.

The team focused on common and simple binarized neural networks (BNNs), efficient networks for IoT/edge devices that use binary weights and activation values and can therefore do computations with fewer computing resources. They began by demonstrating how power consumption measurements can be exploited to reveal the secret weight values that determine a neural network's computations. By feeding the device random known inputs many times, the adversary records the corresponding power activity and correlates it with hypothesized power patterns for each candidate value of the BNN's secret weights, even in a highly parallelized hardware implementation.
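
The rough logic of such an attack can be sketched as follows, under the same toy popcount leakage assumption as above and with a weight vector small enough that exhaustive enumeration is feasible. This illustrates the approach, not the paper's actual procedure.

```python
import numpy as np
from itertools import product

# Illustrative attack sketch (not the paper's procedure): recover the
# secret +/-1 weights of one binarized neuron by correlating a
# popcount-based power hypothesis with simulated traces.

rng = np.random.default_rng(2)
N = 6                                    # toy weight-vector length
secret_w = rng.choice([-1, 1], size=N)   # the weights to recover

def popcount(x, w):
    """Matching positions in an XNOR-popcount product (power proxy)."""
    return int(np.sum(x == w))

# The attacker feeds many random known inputs and records power,
# modeled here as the popcount plus measurement noise.
inputs = rng.choice([-1, 1], size=(3000, N))
traces = np.array([popcount(x, secret_w) for x in inputs], dtype=float)
traces += rng.normal(0, 0.5, traces.shape)

# Enumerate candidate weight vectors; keep the best-correlated one.
best, best_corr = None, -np.inf
for cand in product([-1, 1], repeat=N):
    hyp = np.array([popcount(x, np.array(cand)) for x in inputs])
    corr = np.corrcoef(hyp, traces)[0, 1]
    if corr > best_corr:
        best, best_corr = cand, corr

print("secret :", secret_w)
print("guessed:", np.array(best))
```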

The team then designed a countermeasure to secure the neural network against such attacks via masking, an algorithm-level defense that yields resilient designs independent of the implementation technology. Masking splits each intermediate computation into two randomized shares that are different every time the neural network runs the same computation, which prevents an attacker from using a single intermediate computation to analyze power consumption patterns. While the technique requires tuning to protect a specific machine learning model, it can be applied on any type of chip that runs a neural network, such as Field Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). With this defense in place, a binarized neural network forces the hypothetical adversary to perform 100,000 sets of power consumption measurements instead of just 200.
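
The core of the masking idea can be illustrated in a few lines (a simplified sketch, not the paper's masked adder tree): each intermediate value is split into two shares with fresh randomness on every run, so neither share on its own, and hence neither share's power signature, depends on the real value.

```python
import numpy as np

# Simplified masking sketch (not the paper's masked components):
# split an intermediate value into two random shares so that no
# single share carries information about the real value.

rng = np.random.default_rng(3)

def mask(value):
    """Split `value` into two shares with fresh randomness each call."""
    share1 = int(rng.integers(-1000, 1000))
    share2 = value - share1        # arithmetic masking: s1 + s2 = value
    return share1, share2

def unmask(share1, share2):
    return share1 + share2

secret_intermediate = 42
for run in range(3):
    s1, s2 = mask(secret_intermediate)
    # The same computation yields different shares on every run, so a
    # first-order power pattern no longer pins down the value.
    print(f"run {run}: shares=({s1}, {s2}) -> {unmask(s1, s2)}")
```

Recombining the shares restores the original value, and because the split is re-randomized on every run, a first-order DPA attack on any single intermediate no longer works.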

However, the masking technique has some notable costs. With the initial masking, the neural network's performance dropped by 50 percent, and the design needed nearly double the computing area on the FPGA chip. The team also noted that attackers could circumvent the basic masking defense by analyzing multiple intermediate computations instead of a single one, leading to a computational arms race in which computations must be split into ever more shares. Adding that extra security can be time-consuming.

Despite this, active countermeasures against DPA attacks are still needed: machine learning (ML) is a critical new target, with several motivating scenarios for keeping the internal ML model secret. Aysu explains that the research is far from done; it is supported by both the U.S. National Science Foundation and the Semiconductor Research Corporation's Global Research Collaboration. He anticipates receiving funding to continue this work for another five years and hopes to enlist more Ph.D. students interested in the effort.

"Interest in hardware security is increasing because, at the end of the day, the hardware is the root of trust," Aysu says. "And if the root of trust is gone, then all the security defenses at other abstraction levels will fail."
