How to Build Realistic Machine Learning Systems for Security?

Monday, January 27, 2020 - 5:00 pm–5:30 pm

Sadia Afroz, ICSI, Avast

Abstract: 

Given the existence of adversarial attacks and fairness biases, one might question whether machine learning is useful for security at all. In this talk, we discuss how to build robust machine learning systems that defend against real-world attacks, focusing on machine learning-based malware detectors. We address the necessity of considering ROC curves where false-positive rates must lie well below 1%. Achieving this in the presence of a polluted ground-truth set, where 10–30% of the data is unlabeled and 2–5% of the labels are incorrect, is a true challenge. When a dynamic model is built, testing it against a repository of malware is impossible, since most malware is ephemeral and may no longer exhibit its malicious behavior. Finally, we discuss how to model realistic adversaries for adversarial attacks and defenses.
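The abstract's point about operating on the ROC curve at false-positive rates well below 1% can be sketched in a few lines. The following is a minimal illustration (not from the talk) of sweeping detection-score thresholds and keeping the one that maximizes the true-positive rate subject to an FPR ceiling; the scores, labels, and function names are hypothetical example choices.

```python
def roc_point(scores, labels, threshold):
    """Return (fpr, tpr) when scores >= threshold are flagged as malicious."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    p = sum(labels)            # number of malicious samples
    n = len(labels) - p        # number of benign samples
    return fp / n, tp / p

def pick_threshold(scores, labels, max_fpr=0.01):
    """Choose the threshold with the highest TPR whose FPR stays below max_fpr."""
    best = None
    for t in sorted(set(scores)):
        fpr, tpr = roc_point(scores, labels, t)
        if fpr < max_fpr and (best is None or tpr > best[1]):
            best = (t, tpr, fpr)
    return best
```

On real malware corpora the benign class dwarfs the malicious class, so even a 1% FPR translates into an unacceptable number of false alarms for users; this is why the operating point must sit far to the left of the typical ROC sweet spot.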


Sadia Afroz is a research scientist at the International Computer Science Institute (ICSI) and Avast Software. Her work focuses on anti-censorship, anonymity, and adversarial learning. Her work on adversarial authorship attribution received the 2013 Privacy Enhancing Technology (PET) award, the best student paper award at the 2012 Privacy Enhancing Technologies Symposium (PETS), and the 2014 ACM SIGSAC dissertation award (runner-up). More about her research can be found at http://www1.icsi.berkeley.edu/~sadia/

BibTeX
@conference{244706,
author = {Sadia Afroz},
title = {How to Build Realistic Machine Learning Systems for Security?},
year = {2020},
address = {San Francisco, CA},
publisher = {{USENIX} Association},
month = jan,
}