Fei Zuo, Bokai Yang, Xiaopeng Li, Lannan Luo, and Qiang Zeng, University of South Carolina
Despite the great achievements made by neural networks on tasks such as image classification, they are brittle and vulnerable to adversarial example (AE) attacks, which are crafted by adding human-imperceptible perturbations to inputs so that a neural-network-based classifier mislabels them. In particular, L0 AEs are a widely discussed category of threats in which adversaries are restricted in the number of pixels they can corrupt. Our observation, however, is that while L0 attacks modify as few pixels as possible, they tend to cause large-amplitude perturbations to the modified pixels. We consider this an inherent limitation of L0 AEs and thwart such attacks by both detecting and rectifying them. The main novelty of the proposed detector is that it converts the AE detection problem into a comparison problem by exploiting this inherent limitation of L0 attacks. More concretely, given an image I, it is pre-processed to obtain another image I'. A Siamese network, which is known to be effective at comparison tasks, takes I and I' as the input pair to determine whether I is an AE. A trained Siamese network automatically and precisely captures the discrepancies between I and I' to detect L0 perturbations. In addition, we show that inpainting, the pre-processing technique used for detection, also works as an effective defense: it has a high probability of removing the adversarial influence of L0 perturbations. Thus, our system, called AEPECKER, demonstrates not only high AE detection accuracy, but also a notable capability to correct the classification results.
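To make the detection pipeline concrete, below is a minimal, illustrative sketch rather than the authors' implementation: it masks candidate pixels with a simple median-filter heuristic (exploiting the large-amplitude nature of L0 perturbations), inpaints them with OpenCV, and compares the original and inpainted images with a toy Siamese network. The function name inpaint_suspicious_pixels, the masking threshold, and the network architecture are assumptions made for illustration; the paper's actual pre-processing and detector may differ.

# Illustrative sketch (not the authors' code): detect L0 perturbations by
# comparing an image with its inpainted counterpart via a Siamese network.
import cv2
import numpy as np
import torch
import torch.nn as nn

def inpaint_suspicious_pixels(image_bgr, threshold=40):
    """Hypothetical pre-processing: mask pixels that deviate strongly from a
    median-filtered version (L0 attacks tend to use large-amplitude changes),
    then inpaint the masked region."""
    blurred = cv2.medianBlur(image_bgr, 5)
    diff = np.abs(image_bgr.astype(np.int16) - blurred.astype(np.int16)).max(axis=2)
    mask = (diff > threshold).astype(np.uint8)
    return cv2.inpaint(image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

class SiameseDetector(nn.Module):
    """Toy Siamese network: a shared CNN encoder and a distance-based AE score."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
        )
        self.classifier = nn.Linear(64, 1)  # maps |f(I) - f(I')| to an AE logit

    def forward(self, img, img_inpainted):
        diff = torch.abs(self.encoder(img) - self.encoder(img_inpainted))
        return self.classifier(diff)  # positive logit indicates "adversarial" after training

# Usage (assumes `image` is an HxWx3 uint8 BGR array):
# restored = inpaint_suspicious_pixels(image)
# to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255
# score = SiameseDetector()(to_tensor(image), to_tensor(restored))

Because the inpainted image I' also serves as a "rectified" input, the same pre-processing step can be fed to the original classifier to recover the correct label, which is the defense role of inpainting described above.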
@inproceedings{zuo2019exploiting,
author = {Fei Zuo and Bokai Yang and Xiaopeng Li and Lannan Luo and Qiang Zeng},
title = {Exploiting the Inherent Limitation of L0 Adversarial Examples},
booktitle = {22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2019)},
year = {2019},
isbn = {978-1-939133-07-6},
address = {Chaoyang District, Beijing},
pages = {293--307},
url = {https://www.usenix.org/conference/raid2019/presentation/zuo},
publisher = {USENIX Association},
month = sep
}