Industrial practitioners' mental models of adversarial machine learning


Lukas Bieringer, QuantPi; Kathrin Grosse, University of Cagliari; Michael Backes, CISPA Helmholtz Center for Information Security; Battista Biggio, University of Cagliari, Pluribus One; Katharina Krombholz, CISPA Helmholtz Center for Information Security


Although machine learning is widely used in practice, little is known about practitioners' understanding of potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and potentially vulnerable components. In other security fields, similar studies have helped to discover root causes and improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. Firstly, practitioners often confuse machine learning security with threats and defences that are not directly related to machine learning. Secondly, in contrast to most academic research, our participants perceive the security of machine learning not solely in terms of individual models, but in the context of entire workflows that consist of multiple components. Jointly with our additional findings, these two facets provide a foundation to substantiate mental models for machine learning security, and they have implications for integrating adversarial machine learning into corporate workflows, reducing practitioners' reported uncertainty, and designing appropriate regulatory frameworks for machine learning security.


@inproceedings{281210,
author = {Lukas Bieringer and Kathrin Grosse and Michael Backes and Battista Biggio and Katharina Krombholz},
title = {Industrial practitioners{\textquoteright} mental models of adversarial machine learning},
booktitle = {Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022)},
year = {2022},
isbn = {978-1-939133-30-4},
address = {Boston, MA},
pages = {97--116},
url = {},
publisher = {USENIX Association},
month = aug
}
