Bolun Wang, UC Santa Barbara; Yuanshun Yao, University of Chicago; Bimal Viswanath, Virginia Tech; Haitao Zheng and Ben Y. Zhao, University of Chicago
Transfer learning is a powerful approach that allows users to quickly build accurate deep-learning (Student) models by "learning" from centralized (Teacher) models pretrained on large datasets, e.g., Google's InceptionV3. We hypothesize that this centralization of model training increases Student models' vulnerability to misclassification attacks that leverage knowledge of the publicly accessible Teacher models. In this paper, we describe our efforts to understand and experimentally validate such attacks in the context of image recognition. We identify techniques that allow attackers to associate Student models with their Teacher counterparts and launch highly effective misclassification attacks on black-box Student models. We validate this on widely used Teacher models in the wild. Finally, we propose and evaluate multiple defenses, including a neuron-distance technique that successfully defends against these attacks while also obfuscating the link between Teacher and Student models.
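To make the setting concrete, here is a minimal transfer-learning sketch in Keras/TensorFlow: the Student copies and freezes the Teacher's layers and trains only a small new head. The 256-unit head and the 10-class task are illustrative assumptions, not details from the paper.

import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load the public Teacher model (e.g., InceptionV3) without its
# classification head; pooled features serve as the Student's input.
teacher = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

# Freeze the copied Teacher layers; only the new head is trained.
teacher.trainable = False

# Student: frozen Teacher feature extractor + a small task-specific head.
student = models.Sequential([
    teacher,
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # hypothetical 10-class task
])
student.compile(optimizer="adam", loss="categorical_crossentropy")

Because the feature-extraction layers are copied verbatim from a publicly available model, anything an attacker learns about the Teacher's internal representations carries over to the Student.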
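The abstract does not spell out the attack mechanism; the paper's core idea is to perturb a source image so that its internal representation at the layer where the Student copies the Teacher mimics the representation of a target image, causing the black-box Student to misclassify it. A minimal sketch of this representation-mimicry idea follows, assuming the Teacher's pooled feature layer is the mimicked layer; the budget, step count, and learning rate are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

# Publicly known Teacher; its frozen layers are shared by the Student.
teacher = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def mimic_attack(x_src, x_tgt, budget=0.01, steps=200, lr=0.005):
    """Perturb x_src so its Teacher features match those of x_tgt."""
    target_feat = teacher(x_tgt)               # representation to mimic
    delta = tf.Variable(tf.zeros_like(x_src))  # adversarial perturbation
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            # L2 distance between internal representations.
            loss = tf.reduce_sum(
                tf.square(teacher(x_src + delta) - target_feat))
        opt.apply_gradients([(tape.gradient(loss, delta), delta)])
        # Keep the perturbation imperceptible (L-infinity budget).
        delta.assign(tf.clip_by_value(delta, -budget, budget))
    return x_src + delta

Since the Student's frozen layers are identical to the Teacher's, features mimicked against the Teacher produce the same internal activations in the Student, so the misclassification transfers without querying the Student's weights.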
@inproceedings{wang-bolun,
author = {Bolun Wang and Yuanshun Yao and Bimal Viswanath and Haitao Zheng and Ben Y. Zhao},
title = {With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning},
booktitle = {27th USENIX Security Symposium (USENIX Security 18)},
year = {2018},
isbn = {978-1-939133-04-5},
address = {Baltimore, MD},
pages = {1281--1297},
url = {https://www.usenix.org/conference/usenixsecurity18/presentation/wang-bolun},
publisher = {USENIX Association},
month = aug
}