Classifiers under Attack

Wednesday, February 1, 2017 - 2:00 pm–2:30 pm

David Evans, University of Virginia


Machine learning classifiers are widely used in security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who construct evasive variants that are misclassified as benign. The main reason is that classifiers are trained on samples collected from previous attacks, which often differ from benign samples in superficial and easily modified ways. Further, many machine learning techniques, including deep neural networks, are inherently fragile. In this talk, I’ll highlight the reasons most classifiers can be evaded by motivated adversaries and demonstrate some successful evasion techniques, including ones that can be fully automated. Then, I’ll talk about methods that could be used to make classifiers less vulnerable to evasion and to evaluate the robustness of deployed classifiers in the presence of adversaries.
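The automated evasion the abstract alludes to can be illustrated with a gradient-sign attack in the style of FGSM. The sketch below is purely illustrative and not from the talk: the logistic-regression "detector," its weights, and the perturbation budget `eps` are all assumed for demonstration.

```python
import numpy as np

# Hypothetical learned parameters of a toy logistic-regression detector
# (illustrative values, not a real trained model).
w = np.array([2.0, -1.0, 0.5])
b = -0.5

def predict(x):
    """Return P(malicious) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A sample the classifier confidently flags as malicious.
x = np.array([1.0, 0.0, 1.0])

# Gradient-sign evasion: for logistic regression the score gradient
# with respect to the input is simply w, so stepping each feature
# against sign(w) lowers the malicious score.
eps = 0.8                      # attacker's perturbation budget (assumed)
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the perturbed variant scores as benign
```

In a real attack the perturbation must also preserve the sample's malicious functionality, which is the harder constraint in domains like malware; the gradient step only captures the classifier-evasion half of the problem.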


@conference {201685,
author = {David Evans},
title = {Classifiers under Attack},
year = {2017},
address = {Oakland, CA},
publisher = {{USENIX} Association},
}