Why Has Your Toaster Been through More Security Testing than the AI System Routing Your Car?

Note: Presentation times are in Pacific Standard Time (PST).

Wednesday, February 02, 2022 - 1:50 pm–2:20 pm

Ram Shankar Siva Kumar, Microsoft


If you look under your toaster, you will find a sticker with the letters "UL" on it – a certification from Underwriters Laboratories promising that the toaster is reasonably safe from spontaneous combustion.

Wouldn't it be comforting to see a sticker under your smart device certifying that it was robustly tested for security and privacy? Or a seal of approval attesting that it is resilient to adversarial manipulation?

After all, if you want to know which security checks your router has passed, you can visit the manufacturer's page, look under the security tab, and get the details. Want to know how your bank's mobile app keeps your data safe? Just google your bank's name along with the word "security" and you can find detailed information on how it adheres to industry standards to safeguard your data.

So, what gives with AI systems? AI systems are deployed in some of the most critical domains, including healthcare, finance, transportation, and even cybersecurity. Why don't we have a concrete list of assurances from these AI vendors? Moreover, if AI is just Software 2.0, shouldn't all the existing standards and certifications apply directly? And securing AI systems is a universal good, right?

Ram Shankar Siva Kumar, Microsoft

Ram Shankar Siva Kumar is a Data Cowboy in Azure Security at Microsoft, empowering engineers to secure machine learning systems. His work has appeared at industry conferences like RSA, BlackHat, Defcon, BlueHat, DerbyCon, MIRCon, and Infiltrate, and at academic conferences like NeurIPS, ICLR, ICML, IEEE S&P, and ACM CCS, and has been covered by Bloomberg, VentureBeat, Wired, and Geekwire. He founded the Adversarial ML Threat Matrix, an ATT&CK-style framework enumerating threats to machine learning. His work on adversarial machine learning appeared notably in the National Security Commission on Artificial Intelligence (NSCAI) Final Report presented to the United States Congress and the President. He is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University and a Technical Advisory Board Member at the University of Washington. He is currently writing the book "AI's Achilles Heel" with Hyrum Anderson, enumerating security vulnerabilities in AI systems and why addressing them is the next infosec imperative.

@conference {277343,
author = {Ram Shankar Siva Kumar},
title = {Why Has Your Toaster Been through More Security Testing than the {AI} System Routing Your Car?},
year = {2022},
address = {Santa Clara, CA},
publisher = {USENIX Association},
month = feb
}

Presentation Video