“Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry

Authors: 

Jaron Mink, University of Illinois at Urbana-Champaign; Harjot Kaur, Leibniz University Hannover; Juliane Schmüser and Sascha Fahl, CISPA Helmholtz Center for Information Security; Yasemin Acar, Paderborn University and George Washington University

Abstract: 

Adversarial machine learning (AML) has the potential to leak training data, force arbitrary classifications, and greatly degrade the overall performance of machine learning models, all of which academics and companies alike regard as serious issues. Despite this, seminal work has found that most organizations insufficiently protect against such threats. While the lack of defenses against AML is most commonly attributed to missing knowledge, it is unknown why mitigations are unrealized in industry projects. To better understand the reasons behind the lack of deployed AML defenses, we conduct semi-structured interviews (n=21) with data scientists and data engineers to explore what barriers impede the effective implementation of such defenses. We find that practitioners’ ability to deploy defenses is hampered by three primary factors: a lack of institutional motivation and educational resources for these concepts, an inability to adequately assess their AML risk and make subsequent decisions, and organizational structures and goals that discourage implementation in favor of other objectives. We conclude by discussing practical recommendations to help companies and practitioners become more aware of these risks and better prepared to respond.

BibTeX
@inproceedings {287284,
author = {Jaron Mink and Harjot Kaur and Juliane Schm{\"u}ser and Sascha Fahl and Yasemin Acar},
title = {{{\textquotedblleft}Security} is not my field, {I{\textquoteright}m} a stats {guy{\textquotedblright}}: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry},
booktitle = {32nd USENIX Security Symposium (USENIX Security 23)},
year = {2023},
isbn = {978-1-939133-37-3},
address = {Anaheim, CA},
pages = {3763--3780},
url = {https://www.usenix.org/conference/usenixsecurity23/presentation/mink},
publisher = {USENIX Association},
month = aug
}