Adversarial Example Defense: Ensembles of Weak Defenses are not Strong


Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and Dawn Song, UC Berkeley


Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which researchers have since shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of them are recently proposed defenses that intentionally combine components designed to work well together. A third defense combines three independent defenses. For all of these defenses' components and for the combined defenses themselves, we show that an adaptive adversary can successfully create adversarial examples with low distortion. Thus, our work implies that an ensemble of weak defenses is not sufficient to provide a strong defense against adversarial examples.
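The core idea of an adaptive attack on an ensemble can be illustrated with a minimal sketch (this is not the paper's exact method, and the toy models and parameters below are illustrative assumptions): the attacker treats the whole ensemble as one differentiable function and ascends the combined loss of all members at once, keeping the perturbation small.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ensemble_loss_grad(x, models, y):
    """Gradient w.r.t. x of the summed cross-entropy loss over all models.

    Each model is a toy logistic classifier (w, b); for logistic regression
    the per-model gradient of the loss w.r.t. the input is (p - y) * w.
    """
    grad = np.zeros_like(x)
    for w, b in models:
        p = sigmoid(w @ x + b)
        grad += (p - y) * w
    return grad

def attack(x, models, y_true, eps=0.5, steps=20, lr=0.1):
    """Iterative FGSM-style attack on the whole ensemble: take sign-of-gradient
    steps that increase the combined loss on the true label, while keeping the
    perturbation inside an L-infinity ball of radius eps (low distortion)."""
    x_adv = x.copy()
    for _ in range(steps):
        g = ensemble_loss_grad(x_adv, models, y_true)
        x_adv = np.clip(x_adv + lr * np.sign(g), x - eps, x + eps)
    return x_adv

# Toy example: three similar classifiers (an "ensemble of weak defenses")
# all classify x correctly, yet one gradient attack fools all of them at once.
rng = np.random.default_rng(0)
models = [(np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(2), 0.0)
          for _ in range(3)]
x = np.array([0.4, 0.4])  # every member predicts class 1 (p > 0.5)
x_adv = attack(x, models, y_true=1.0)
preds_clean = [sigmoid(w @ x + b) > 0.5 for w, b in models]
preds_adv = [sigmoid(w @ x_adv + b) > 0.5 for w, b in models]
```

Because the members make correlated errors, a single perturbation direction suffices; this is the intuition behind why combining similar weak defenses adds little robustness.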

@inproceedings {206180,
author = {Warren He and James Wei and Xinyun Chen and Nicholas Carlini and Dawn Song},
title = {Adversarial Example Defense: Ensembles of Weak Defenses are not Strong},
booktitle = {11th {USENIX} Workshop on Offensive Technologies ({WOOT} 17)},
year = {2017},
address = {Vancouver, BC},
url = {},
publisher = {{USENIX} Association},
}