X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection


Aishan Liu and Jun Guo, Beihang University; Jiakai Wang, Zhongguancun Laboratory; Siyuan Liang, Chinese Academy of Sciences; Renshuai Tao, Beihang University; Wenbo Zhou, University of Science and Technology of China; Cong Liu, iFLYTEK; Xianglong Liu, Beihang University, Zhongguancun Laboratory, and Hefei Comprehensive National Science Center; Dacheng Tao, JD Explore Academy


Adversarial attacks are valuable for evaluating the robustness of deep learning models. Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture perturbation). However, attacks targeting texture-free X-ray images remain underexplored, despite the widespread application of X-ray imaging in safety-critical scenarios such as the X-ray detection of prohibited items. In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario. Specifically, we posit that successful physical adversarial attacks in this scenario should be specially designed to circumvent the challenges posed by color/texture fading and complex overlapping. To this end, we propose X-Adv to generate physically printable metals that act as an adversarial agent capable of deceiving X-ray detectors when placed in luggage. To resolve the issues associated with color/texture fading, we develop a differentiable converter that facilitates the generation of 3D-printable objects with adversarial shapes, using the gradients of a surrogate model rather than directly generating adversarial textures. To place the printed 3D adversarial objects in luggage with complex overlapped instances, we design a policy-based reinforcement learning strategy to find locations eliciting strong attack performance in worst-case scenarios whereby the prohibited items are heavily occluded by other items. To verify the effectiveness of the proposed X-Adv, we conduct extensive experiments in both the digital and the physical world (employing a commercial X-ray security inspection system for the latter case). Furthermore, we present the physical-world X-ray adversarial attack dataset XAD. We hope this paper will draw more attention to the potential threats targeting safety-critical scenarios. 
Our codes and XAD dataset are available at https://github.com/DIG-Beihang/X-adv.
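The paper's reinforcement-learning placement strategy can be illustrated with a toy sketch. The following is NOT the authors' implementation: the 5x5 placement grid, the `detector_confidence` function (a fake stand-in for querying a surrogate X-ray detector), and all hyperparameters are hypothetical. It shows the general shape of a REINFORCE-style search that learns *where* to place an adversarial object so the detector's confidence in the prohibited item is driven down.

```python
import math
import random

GRID = 5  # hypothetical 5x5 grid of candidate placement cells in the luggage image

def detector_confidence(pos):
    # Hypothetical stand-in for a surrogate detector's confidence in the
    # prohibited item when the adversarial object sits at `pos`. We fake a
    # peak at the grid centre; the attacker wants this value driven low.
    cx, cy = 2, 2
    return 1.0 / (1.0 + (pos[0] - cx) ** 2 + (pos[1] - cy) ** 2)

def reinforce_placement(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * (GRID * GRID)  # one logit per cell; softmax = placement policy

    for _ in range(steps):
        # Softmax over logits (shifted by the max for numerical stability).
        m = max(logits)
        exps = [math.exp(v - m) for v in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        # Sample a placement from the current policy and score it.
        a = rng.choices(range(GRID * GRID), weights=probs)[0]
        pos = (a // GRID, a % GRID)
        reward = 1.0 - detector_confidence(pos)  # low confidence = high reward
        # REINFORCE update: shift probability mass toward high-reward placements.
        for i in range(GRID * GRID):
            grad = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * reward * grad

    best = max(range(GRID * GRID), key=lambda i: logits[i])
    return (best // GRID, best % GRID)

pos = reinforce_placement()
print(pos, detector_confidence(pos))
```

In the paper itself the search space is the location of a physically printed 3D object inside real luggage and the reward comes from a surrogate detection model, with occluded worst-case layouts considered; this sketch only conveys the policy-gradient mechanism.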


@inproceedings {287200,
author = {Aishan Liu and Jun Guo and Jiakai Wang and Siyuan Liang and Renshuai Tao and Wenbo Zhou and Cong Liu and Xianglong Liu and Dacheng Tao},
title = {{X-Adv}: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection},
booktitle = {32nd USENIX Security Symposium (USENIX Security 23)},
year = {2023},
isbn = {978-1-939133-37-3},
address = {Anaheim, CA},
pages = {3781--3798},
url = {https://www.usenix.org/conference/usenixsecurity23/presentation/liu-aishan},
publisher = {USENIX Association},
month = aug
}
