Giuseppe Garofalo, Vera Rimmer, Tim Van hamme, Davy Preuveneers, and Wouter Joosen, imec-DistriNet, KU Leuven
Face recognition systems are becoming a prevalent authentication solution on smartphones. This work is the first to deploy a poisoning attack against an authentication system based on a state-of-the-art face recognition technique. The attack targets the underlying SVM learning model that classifies face templates extracted by the FaceNet deep neural network. We demonstrate how an intelligent attacker can undermine the reliability of the authentication system by injecting a single, carefully crafted adversarial image into its training data. The most successful attacks within our evaluation framework trigger an authentication error of more than 50%. Our research illustrates the urgent need to evaluate and protect face authentication against adversarial machine learning.
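The attack described above can be illustrated with a deliberately simplified sketch. The paper's pipeline uses FaceNet embeddings classified by an SVM; the toy below substitutes 2-D points for embeddings and a distance-to-centroid rule for the SVM, purely to show the poisoning mechanism: one mislabeled "victim" sample injected into the training data shifts the learned model so that the attacker's probes are accepted. All numbers and thresholds here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for FaceNet embeddings: 2-D points instead of 128-D vectors.
victim = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))    # enrolled user
attacker = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(20, 2))  # impostor probes

def centroid(points):
    return points.mean(axis=0)

def accepts(template, user_centroid, threshold=1.0):
    # Authentication succeeds if the probe lies close to the user's centroid
    # (a crude stand-in for the SVM decision used in the paper).
    return np.linalg.norm(template - user_centroid) < threshold

# Clean model: the attacker's probes are all rejected.
clean_c = centroid(victim)
clean_rate = np.mean([accepts(a, clean_c) for a in attacker])

# Poisoning: a single crafted point, labelled as the victim, is injected
# into the training data, dragging the victim's model toward the attacker.
poison = np.array([[60.0, 60.0]])  # one adversarial "victim" sample
poisoned_c = centroid(np.vstack([victim, poison]))
poisoned_rate = np.mean([accepts(a, poisoned_c) for a in attacker])

print(f"attacker acceptance before poisoning: {clean_rate:.0%}")
print(f"attacker acceptance after poisoning:  {poisoned_rate:.0%}")
```

After injecting just one point, most of the attacker's probes fall within the acceptance region, while the legitimate user's own templates no longer do, mirroring the authentication errors reported in the paper at a cartoon scale.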