Adversarial Examples in Machine Learning

Wednesday, February 1, 2017 - 2:30 pm–3:00 pm

Nicolas Papernot, Google PhD Fellow at The Pennsylvania State University

Abstract: 

Machine learning models, including deep neural networks, have been shown to be vulnerable to adversarial examples: malicious inputs modified subtly, and often imperceptibly to humans, so as to compromise the integrity of a model's outputs. Adversarial examples thus enable adversaries to manipulate system behavior. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software.
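To make the crafting step concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial example crafting algorithms, run against a toy logistic-regression classifier. The weights, input, and epsilon below are illustrative stand-ins, not models or settings from the talk.

import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression classifier with fixed, illustrative weights.
w = rng.normal(size=32)
b = 0.0

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, epsilon):
    # Fast gradient sign method: step in the direction that increases the
    # cross-entropy loss, bounded by epsilon in the L-infinity norm.
    grad_x = (predict_proba(x) - y) * w  # analytic loss gradient w.r.t. x
    return x + epsilon * np.sign(grad_x)

x = rng.normal(size=32)              # a "clean" input with true label y = 1
x_adv = fgsm(x, y=1.0, epsilon=0.25)
print("clean       p(y=1):", predict_proba(x))
print("adversarial p(y=1):", predict_proba(x_adv))

Because the gradient of the loss with respect to the input is available in closed form for this model, a single signed step of size epsilon is enough to shift the predicted probability substantially while changing each input feature only slightly.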

In fact, the feasibility of misclassification attacks based on adversarial examples has been demonstrated for image, text, and malware classifiers. Furthermore, adversarial examples that affect one model often affect another, even when the two models differ substantially. This transferability effectively enables attackers to target remotely hosted victim classifiers with very little knowledge of the victim model.
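The transfer phenomenon can be reproduced at toy scale. The following sketch (illustrative models and hyperparameters, with scikit-learn assumed as a dependency; this is not the talk's experimental setup) crafts FGSM perturbations against a hand-trained logistic-regression surrogate and measures how much they degrade a separately trained neural network whose gradients the attacker never sees.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Surrogate: logistic regression trained by gradient descent, so the
# attacker has analytic gradients for crafting.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_tr @ w + b)))
    w -= 0.1 * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= 0.1 * np.mean(p - y_tr)

# Victim: a structurally different model (a small neural network).
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X_tr, y_tr)

# FGSM crafted on the surrogate's loss gradient, evaluated on the victim.
p_te = 1 / (1 + np.exp(-(X_te @ w + b)))
X_adv = X_te + 0.5 * np.sign(np.outer(p_te - y_te, w))

print("victim accuracy, clean:      ", victim.score(X_te, y_te))
print("victim accuracy, adversarial:", victim.score(X_adv, y_te))

The attacker here never queries the victim for gradients; perturbations crafted purely against the surrogate still tend to degrade the victim, which is the property that makes black-box attacks on remote classifiers practical.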

This talk covers adversarial example crafting algorithms operating under varying threat models and application domains, as well as defenses proposed to mitigate such attacks. A practical tutorial will be given throughout the talk, allowing participants to familiarize themselves with adversarial example crafting.
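As a taste of the defense side, one commonly proposed mitigation is adversarial training, in which the model is trained on adversarially perturbed inputs. The sketch below (synthetic data and illustrative hyperparameters; not necessarily a defense endorsed in the talk) compares the robust accuracy of a standardly trained toy model against an adversarially trained one.

import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 2000, 10, 0.3
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def train(adversarial):
    # Logistic regression; optionally trained on FGSM-perturbed inputs.
    w, b = np.zeros(d), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        if adversarial:
            Xt = X + eps * np.sign(np.outer(p - y, w))  # FGSM vs. current model
        else:
            Xt = X
        pt = 1 / (1 + np.exp(-(Xt @ w + b)))
        w -= 0.1 * (Xt.T @ (pt - y)) / n
        b -= 0.1 * np.mean(pt - y)
    return w, b

def robust_accuracy(w, b):
    # Accuracy on FGSM perturbations crafted against the final (w, b).
    p = 1 / (1 + np.exp(-(X @ w + b)))
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    return np.mean(((X_adv @ w + b) > 0) == y)

print("standard model, robust accuracy:   ", robust_accuracy(*train(False)))
print("adv-trained model, robust accuracy:", robust_accuracy(*train(True)))

Training against perturbations generated on the fly typically improves accuracy under the same attack, though, as the talk's defense discussion suggests, no single mitigation settles the problem.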

Nicolas Papernot, Google PhD Fellow, The Pennsylvania State University

Nicolas Papernot is a PhD student in Computer Science and Engineering working with Dr. Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security and deep learning. He is supported by a Google PhD Fellowship in Security. In 2016, he received his MS in Computer Science and Engineering from the Pennsylvania State University and his MS in Engineering Sciences from the École Centrale de Lyon.


BibTeX
@conference {202478,
  author = {Nicolas Papernot},
  title = {Adversarial Examples in Machine Learning},
  year = {2017},
  address = {Oakland, CA},
  publisher = {USENIX Association},
  month = feb
}

Presentation Video