Undercover Networks
Protecting neural networks from differential power analysis

Neural networks can spill their secrets to those who know how to ask. A new approach secures them from prying eyes.

What’s new: Security researchers at North Carolina State University and Intel demonstrate in a paper that adversaries can recover a model’s parameter values by measuring its power use. They also offer countermeasures that mask those values.

How it works: The authors show that neural networks, especially those designed to run on smart home speakers and other edge devices, are vulnerable to so-called differential power analysis. The researchers deciphered the weights of a binary neural network with three fully connected hidden layers of 1,024 neurons each by monitoring its power consumption over multiple inference operations.
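
The paper targets real hardware measurements, but the statistical core of differential power analysis can be sketched in a few lines. The following is an illustrative toy, assuming a made-up leakage model in which an extra bit flip draws extra power; the variable names and noise levels are our own, not the authors’.

```python
# Illustrative sketch of differential power analysis against a single +1/-1
# weight, under an assumed toy leakage model (not the paper's measurement setup).
import numpy as np

rng = np.random.default_rng(0)
secret_weight = -1                         # the weight the attacker wants to recover
inputs = rng.choice([-1, 1], size=5000)    # known inputs observed during inference

def leakage(weight, x):
    """Toy power model: an extra bit flips (costing power) when weight * x > 0."""
    return 1.0 if weight * x > 0 else 0.0

# "Measured" traces = data-dependent leakage buried in measurement noise.
traces = np.array([leakage(secret_weight, x) for x in inputs])
traces = traces + rng.normal(0.0, 2.0, size=traces.shape)

# Correlate each weight hypothesis with the noisy measurements; the correct
# hypothesis stands out even though individual traces look like noise.
for guess in (-1, 1):
    predicted = np.array([leakage(guess, x) for x in inputs])
    corr = np.corrcoef(predicted, traces)[0, 1]
    print(f"weight guess {guess:+d}: correlation {corr:+.3f}")
```

Repeated weight by weight and layer by layer, this kind of hypothesis test is how an adversary can reconstruct a network’s parameters from power traces alone.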

  • To thwart such attacks, the researchers adapted a tactic from cryptography called masking: the network randomly splits its computations into two streams each time it runs and recombines them at the end of the run. In tests, this approach successfully masked the weights (see the sketch after this list).
  • The researchers propose another method, called hiding, in which the system artificially raises its power consumption while running sensitive operations. This drowns out the data-dependent variations that power analysis relies on.
  • The researchers say their masking and hiding methods are adaptable to any type of neural network.
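
To make the two-stream idea concrete, here is a minimal sketch of Boolean masking, assuming an XOR-based secret sharing of binary weights; the paper’s masked implementation operates closer to the hardware and differs in detail.

```python
# Minimal sketch of the masking idea under our own assumptions: each secret
# value is split into two random shares that are processed separately, so
# neither stream's power trace depends on the secret alone.
import numpy as np

rng = np.random.default_rng(1)

def mask(value):
    """Split a bit vector into two XOR shares; each share alone looks random."""
    share_a = rng.integers(0, 2, size=value.shape, dtype=np.uint8)
    share_b = value ^ share_a
    return share_a, share_b

def unmask(share_a, share_b):
    """Recombine the two streams at the end of the run."""
    return share_a ^ share_b

weights = rng.integers(0, 2, size=8, dtype=np.uint8)   # binary weights as 0/1 bits
a, b = mask(weights)
assert np.array_equal(unmask(a, b), weights)
print("shares:", a, b, "-> recombined:", unmask(a, b))
```

Because each share is uniformly random on its own, the power drawn while processing either stream is statistically independent of the secret weights; only recombining the streams at the end of the run recovers them.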

Behind the news: Cryptography researchers wrote about differential power analysis and a related technique called simple power analysis as far back as 1998. Both techniques exploit the fact that computer processors use more energy to change a 0 to a 1 (or vice versa) than to maintain either value.
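
That switching cost is the basis of the Hamming-distance power model commonly used in this literature; here is a brief illustration (our own, purely for intuition).

```python
# Toy illustration of the leakage both attacks exploit: power draw scales with
# the number of bits that flip when a register changes value.
def bit_flips(prev, curr):
    """Hamming distance between two register states."""
    return bin(prev ^ curr).count("1")

print(bit_flips(0b1010, 0b1010))  # 0 flips: holding a value costs little energy
print(bit_flips(0b1010, 0b0101))  # 4 flips: every transition costs extra energy
```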

Yes, but: The countermeasures proposed by the researchers throttled the system’s performance, slowing it down by as much as 50 percent. The authors also worry that adversaries could find ways to analyze the two streams, forcing the defenders to split computations further with an even greater impact on performance.

Why it matters: The ability to reverse engineer a neural network’s weights makes it easier to create adversarial examples that fool the network, and to build knockoffs that put security and intellectual property at risk.

We’re thinking: Deep learning is opening new use cases, but also new vulnerabilities that will require ongoing research to identify and counter.
