Sanghyun Hong, University of Maryland, College Park and Maryland Cybersecurity Center (MC2)
The widespread adoption of machine learning (ML) incentivizes potential adversaries who wish to manipulate systems that include ML components. As a consequence, research in adversarial machine learning studies attack surfaces such as training data contaminated with malicious samples (data poisoning) and predictions manipulated by imperceptible perturbations (adversarial examples). However, most prior work focuses on the "soundness of mind": it treats ML as a mathematical concept and overlooks the security threats posed by practical hardware attacks such as fault injection or side-channel attacks.
In this talk, we will propose a new perspective: we view ML as a computational tool running on hardware, a potentially "vulnerable body". We will introduce the emerging research on the vulnerabilities of ML models to practical hardware attacks. These attacks can cause unexpected damage, and studying them sheds new light on the dangers of hardware-based attack vectors.
First, we review the impact of fault-injection attacks. We show that, by flipping a single bit in the memory representation of a deep neural network (DNN), an adversary can degrade its prediction accuracy by 90% or more. Contrary to the conventional wisdom in the ML community---that DNNs are resilient to parameter perturbations---we find that this vulnerability is widespread in modern DNNs. An adversary can exploit it in practice with Rowhammer, a software-induced fault attack; we demonstrate that, even when the attacker flips bits in memory at random, they can inflict an accuracy drop of more than 10% within a minute.
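To make the single-bit-flip effect concrete, here is a minimal sketch (our own illustration, not the code behind the results above) that flips one bit in the float32 encoding of a DNN weight; the weight value 0.042 and the bit positions are arbitrary choices.

```python
import numpy as np

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = least significant, 31 = sign) in the float32 encoding of `value`."""
    encoded = np.array([value], dtype=np.float32).view(np.uint32)
    encoded ^= np.uint32(1 << bit)
    return float(encoded.view(np.float32)[0])

w = 0.042                  # a typical small DNN weight (illustrative value)
print(flip_bit(w, 30))     # flip the top exponent bit   -> roughly 1.4e+37
print(flip_bit(w, 0))      # flip the lowest mantissa bit -> still about 0.042
```

A flip in a high exponent bit blows a benign parameter up by dozens of orders of magnitude, which is why a single well-placed flip can cripple a model's accuracy, while flips in low mantissa bits are essentially harmless.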
Second, we review the impact of side-channel attacks. We show that a typical cache side-channel attacker can reverse-engineer the architectural details of a DNN model. To this end, the attacker exploits the computational regularities in ML frameworks: DNN layers are processed sequentially, and the time it takes to process each layer depends on its configuration. In practice, our attacker was able to steal two essential components of deep learning systems: a data preprocessing pipeline and a custom DNN architecture. Both are potentially proprietary, as their development requires substantial resources. Nevertheless, the leakage from a cache side-channel while the victim processes a single input is sufficient to reconstruct them with 0% error.
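The regularity the attacker exploits can be illustrated with a short sketch (a toy example of ours, not the attack code): in a sequential model, layers execute one after another, and how long each takes is determined by its configuration, so even a coarse per-layer timing trace narrows down the architecture. The PyTorch model and input size below are made up for illustration.

```python
import time
import torch
import torch.nn as nn

# Toy victim model; layer types and sizes are illustrative only.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 10),
)

x = torch.randn(1, 3, 64, 64)        # a single input, as in the attack setting
with torch.no_grad():
    for layer in model:              # layers are processed strictly in sequence
        start = time.perf_counter()
        x = layer(x)
        elapsed_ms = (time.perf_counter() - start) * 1e3
        print(f"{layer.__class__.__name__:>18s}: {elapsed_ms:6.2f} ms")
```

A real cache side-channel attacker observes only a noisy proxy of such a trace, but because the sequence and duration of layer computations are determined by the architecture, that proxy is enough to narrow down the configuration.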
Finally, we conclude by emphasizing that the vulnerability of ML to hardware attacks is still an under-studied topic; thus, we encourage the community to re-examine the security properties guaranteed by previous work from this new angle. Moreover, some properties of ML make hardware attacks particularly easy to exploit; therefore, we also need to consider ML-level defenses that account for these properties. We believe this is the right moment to pursue the ancient wisdom: "a sound mind in a sound body."
Note: our website outlines this line of research: http://hardwarefail.ml
Sanghyun Hong, University of Maryland, College Park and Maryland Cybersecurity Center (MC2)
Sanghyun Hong is a Ph.D. candidate in Computer Science at the University of Maryland, College Park (UMD), advised by Professor Tudor Dumitras. His research interests lie in the security and privacy of machine learning (ML). In his dissertation research, he exposed the vulnerability of deep learning algorithms to hardware attack vectors, such as Rowhammer or side-channel attacks. He also worked on identifying hidden properties of deep learning algorithms, such as overthinking and gradient-level disparity, whose quantification led to defensive mechanisms against backdoor and data poisoning attacks, respectively. His research has been published at security and ML conferences: USENIX Security, ICLR, ICML, and NeurIPS. He is a recipient of the Ann G. Wylie Dissertation Fellowship and is currently a Future Faculty Fellow in the A. James Clark School of Engineering at UMD. He is on the academic job market this year.
@misc{hong2021soundmind,
  author    = {Sanghyun Hong},
  title     = {A Sound Mind in a Vulnerable Body: Practical Hardware Attacks on Deep Learning},
  year      = {2021},
  publisher = {USENIX Association},
  month     = feb
}