Auditing Data Privacy for Machine Learning

Note: Presentation times are in Pacific Standard Time (PST).

Wednesday, February 02, 2022 - 3:50 pm–4:20 pm

Reza Shokri, National University of Singapore

Large machine learning models (e.g., deep language models) memorize a significant amount of information about the individual data records in their training set. Recent inference attacks against machine learning algorithms demonstrate how an adversary with access to a model’s parameters or predictions can extract sensitive information about its training data. Specifically, these attacks reflect the identification risk of a model by detecting the presence of individual data records in its training set (hence the name membership inference attacks). Because these attacks measure how much information a model leaks about the data it was trained on, they can serve as a means of auditing the privacy risks of machine learning algorithms. Based on the results of these attacks on many real-world systems and datasets (e.g., Google’s and Amazon’s ML-as-a-service platforms, federated learning algorithms, and models trained on sensitive text, medical, location, purchase-history, and image data), we conclude that large models pose a significant risk to the data privacy of individuals and should themselves be treated as a type of personal data. Thus, we need carefully designed methodologies and tools to audit the data privacy risk of machine learning in a wide range of applications. Guidance released by the European Commission and the White House calls for protecting personal data during all phases of deploying AI systems and for building systems that are resistant to attacks. Recent reports published by the Information Commissioner’s Office (ICO) on auditing AI and by the National Institute of Standards and Technology (NIST) on securing applications of artificial intelligence also highlight the privacy risk that machine learning models pose to training data, and both specifically identify membership inference as a confidentiality violation and a potential threat to the training data.
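To make the idea concrete, here is a minimal sketch (not the ML Privacy Meter API) of a loss-threshold membership inference attack: the attacker guesses that a record was in the training set when the model's loss on it is low, exploiting the tendency of trained models to fit memorized training records more closely. The loss distributions below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-record cross-entropy losses: members (training
# records) tend to have lower loss than non-members.
member_losses = rng.gamma(shape=1.0, scale=0.5, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=1.0, size=1000)

def membership_attack(losses, threshold):
    """Predict 'member' (True) when the loss falls below the threshold."""
    return losses < threshold

threshold = 1.0
tpr = membership_attack(member_losses, threshold).mean()     # true positive rate
fpr = membership_attack(nonmember_losses, threshold).mean()  # false positive rate

# Attack advantage (TPR - FPR) is a common leakage score: 0 means the
# attacker does no better than random guessing at distinguishing
# members from non-members; 1 means perfect distinguishing.
advantage = tpr - fpr
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

A positive advantage on held-out members versus non-members is direct, quantitative evidence that the model leaks information about its training set.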
The ICO’s auditing framework recommends that organizations identify these threats and take measures to minimize the risk. As the ICO’s investigation teams will use this framework to assess compliance with data protection laws, organizations must account for and estimate the privacy risks that models pose to their data. To this end, we have developed an open-source tool named ML Privacy Meter, based on membership inference algorithms; tech companies are using similar algorithms to analyze privacy risk in their machine learning systems. For example, ML Privacy Meter and similar tools can support a data protection impact assessment (DPIA) by providing a quantitative assessment of the privacy risk of a machine learning model. The tool can generate extensive privacy reports on the aggregate-level and individual-level risks to training data records. It can estimate the amount of information that is revealed through a model’s predictions (when the model is deployed) or its parameters (when they are shared). Hence, whether query access to the model is provided or the entire model is revealed, the tool can be used to assess the potential threats to the training data.

In this talk, I will discuss what privacy risk is (and what it is not), the difference between privacy and confidentiality (two notions that are easily confused), why models are vulnerable to inference attacks, the methodology for quantifying privacy risk in machine learning, and examples of how ML Privacy Meter and similar tools enable detailed auditing of ML systems. I will also show the fundamental and intuitive relation between auditing mechanisms and defense mechanisms for privacy (e.g., differential privacy).
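One way to see the relation between auditing and differential privacy: an (ε, 0)-differentially private training algorithm bounds any membership inference attack by TPR ≤ e^ε · FPR (and symmetrically for the complements), so the rates an attack actually achieves imply an empirical lower bound on ε. The sketch below illustrates this with hypothetical attack numbers; it is not part of ML Privacy Meter.

```python
import numpy as np

def empirical_epsilon_lower_bound(tpr, fpr):
    """Lower bound on epsilon implied by an attack's observed TPR and FPR.

    Pure (epsilon, 0)-DP implies TPR <= e^eps * FPR and
    (1 - FPR) <= e^eps * (1 - TPR); both directions give a bound,
    so we take the max.
    """
    return max(np.log(tpr / fpr), np.log((1 - fpr) / (1 - tpr)))

# A hypothetical attack achieving 60% TPR at 5% FPR rules out any
# claim that training was epsilon-DP for epsilon < ln(0.6 / 0.05).
eps_lb = empirical_epsilon_lower_bound(tpr=0.60, fpr=0.05)
print(f"empirical lower bound on epsilon: {eps_lb:.2f}")  # ~2.48
```

In this sense a membership inference audit complements a differential privacy guarantee: the guarantee upper-bounds leakage by design, while the audit lower-bounds it empirically.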

It is very important for ML engineers, policymakers, and researchers to be aware of these risks, their implications, and the methodology for auditing privacy risk across different types of machine learning algorithms. This awareness can pave the way for privacy by design in machine learning.


Reza Shokri is a NUS Presidential Young Professor of Computer Science. His research focuses on data privacy and trustworthy machine learning. He is a recipient of the IEEE Security and Privacy (S&P) Test-of-Time Award 2021 for his paper on quantifying location privacy. He received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018 for his work on analyzing the privacy risks of machine learning models. He received the NUS Early Career Research Award 2019, the VMware Early Career Faculty Award 2021, and the Intel Faculty Research Award (Private AI Collaborative Research Institute) in 2021 and 2022. He obtained his PhD from EPFL.
@conference{277402,
  author = {Reza Shokri},
  title = {Auditing Data Privacy for Machine Learning},
  year = {2022},
  address = {Santa Clara, CA},
  publisher = {USENIX Association},
  month = feb,
}
