Interpretable Deep Learning under Fire

Authors: 

Xinyang Zhang, Pennsylvania State University; Ningfei Wang, University of California Irvine; Hua Shen, Pennsylvania State University; Shouling Ji, Zhejiang University and Alibaba-ZJU Joint Institute of Frontier Technologies; Xiapu Luo, Hong Kong Polytechnic University; Ting Wang, Pennsylvania State University

Abstract: 

Providing explanations for deep neural network (DNN) models is crucial for their use in security-sensitive domains. A plethora of interpretation models have been proposed to help users understand the inner workings of DNNs: how does a DNN arrive at a specific decision for a given input? The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process. Yet, due to its data-driven nature, interpretability itself is potentially susceptible to malicious manipulation, about which little is known thus far.

Here we bridge this gap by conducting the first systematic study on the security of interpretable deep learning systems (IDLSes). We show that existing IDLSes are highly vulnerable to adversarial manipulations. Specifically, we present ADV2, a new class of attacks that generate adversarial inputs not only misleading target DNNs but also deceiving their coupled interpretation models. Through empirical evaluation against four major types of IDLSes on benchmark datasets and in security-critical applications (e.g., skin cancer diagnosis), we demonstrate that with ADV2 the adversary is able to arbitrarily designate an input's prediction and interpretation. Further, with both analytical and empirical evidence, we identify the prediction-interpretation gap as one root cause of this vulnerability -- a DNN and its interpretation model are often misaligned, resulting in the possibility of exploiting both models simultaneously. Finally, we explore potential countermeasures against ADV2, including leveraging its low transferability and incorporating it in an adversarial training framework. Our findings shed light on designing and operating IDLSes in a more secure and informative fashion, leading to several promising research directions.
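The attack described above optimizes a single adversarial input against two objectives at once: a prediction loss that pushes the model toward the adversary's target class, and an interpretation loss that pins the explanation (e.g., a saliency map) to a benign-looking target map. Below is a minimal, hypothetical sketch of that joint-loss idea on a toy two-layer network with gradient saliency; the network, dimensions, step sizes, and the use of finite-difference gradients are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network standing in for a target DNN (weights are arbitrary).
W1 = rng.normal(scale=0.5, size=(8, 6))
W2 = rng.normal(scale=0.5, size=(3, 8))

def logits(x):
    return W2 @ np.tanh(W1 @ x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency(x, c, eps=1e-4):
    """Gradient of the class-c logit w.r.t. the input (finite differences)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (logits(x + d)[c] - logits(x - d)[c]) / (2 * eps)
    return g

def joint_loss(x, target_class, target_map, lam=1.0):
    # Prediction term: cross-entropy toward the adversary's target class.
    pred = -np.log(softmax(logits(x))[target_class] + 1e-12)
    # Interpretation term: keep the saliency map close to a target map.
    interp = np.sum((saliency(x, target_class) - target_map) ** 2)
    return pred + lam * interp

# Attack loop: plain gradient descent on the joint loss
# (gradient again estimated by finite differences for simplicity).
x0 = rng.normal(size=6)
target_class = 2
target_map = saliency(x0, target_class)  # benign-looking map to mimic

x = x0.copy()
for _ in range(200):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = 1e-3
        g[i] = (joint_loss(x + d, target_class, target_map)
                - joint_loss(x - d, target_class, target_map)) / 2e-3
    x -= 0.05 * g

print(softmax(logits(x))[target_class])  # probability of the target class
```

The key design point is the trade-off parameter `lam`: with `lam = 0` this degenerates into an ordinary targeted adversarial attack, while larger values force the explanation to stay anchored, which is what lets such an input evade interpretation-based inspection.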

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone.

BibTeX
@inproceedings {244036,
author = {Xinyang Zhang and Ningfei Wang and Hua Shen and Shouling Ji and Xiapu Luo and Ting Wang},
title = {Interpretable Deep Learning under Fire},
booktitle = {29th {USENIX} Security Symposium ({USENIX} Security 20)},
year = {2020},
isbn = {978-1-939133-17-5},
pages = {1659--1676},
url = {https://www.usenix.org/conference/usenixsecurity20/presentation/zhang-xinyang},
publisher = {{USENIX} Association},
month = aug,
}