Tight Auditing of Differentially Private Machine Learning

Authors: 

Milad Nasr, Jamie Hayes, Thomas Steinke, and Borja Balle, Google DeepMind; Florian Tramèr, ETH Zurich; Matthew Jagielski, Nicholas Carlini, and Andreas Terzis, Google DeepMind

Distinguished Paper Award Winner

Abstract: 

Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm's provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce nontrivial statistical estimates of the privacy leakage.
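
To make the phrase "empirical privacy estimate" concrete: prior auditing work typically runs a membership-inference style distinguisher many times, bounds its false-positive and false-negative rates with Clopper-Pearson confidence intervals, and converts those bounds into a lower bound on epsilon. The sketch below illustrates that conversion under those assumptions; the function names and the SciPy-based details are illustrative, not code from this paper.

import math
from scipy.stats import beta

def clopper_pearson_upper(k, n, alpha=0.05):
    # Exact one-sided upper confidence bound on a Bernoulli rate,
    # observed as k successes in n trials.
    return 1.0 if k >= n else beta.ppf(1 - alpha, k + 1, n - k)

def empirical_epsilon_lower_bound(false_positives, false_negatives, trials,
                                  delta=1e-5, alpha=0.05):
    # Upper-bound the attack's error rates, then invert the (eps, delta)-DP
    # constraints FPR + exp(eps) * FNR >= 1 - delta (and its symmetric twin)
    # to obtain a statistically valid lower bound on epsilon.
    fpr = clopper_pearson_upper(false_positives, trials, alpha)
    fnr = clopper_pearson_upper(false_negatives, trials, alpha)
    candidates = [0.0]
    if fnr > 0 and (1 - delta - fpr) > 0:
        candidates.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and (1 - delta - fnr) > 0:
        candidates.append(math.log((1 - delta - fnr) / fpr))
    return max(candidates)

Because the error rates must be estimated from repeated end-to-end training runs, the confidence bounds only become informative after thousands of runs, which is the second limitation noted above.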

This work addresses both issues. We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets—if the adversary can see all model updates during training. Prior auditing works rely on the same assumption, which is permitted under the standard differential privacy threat model. This threat model is also applicable, e.g., in federated learning settings. Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy. We demonstrate the utility of our improved auditing schemes by surfacing implementation bugs in private machine learning code that eluded prior auditing techniques.
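
As a rough, hedged illustration of how an auditor who observes every noised model update might obtain a tight estimate from very few runs: inject a random unit-norm canary gradient, project each observed DP-SGD update onto it, treat the projections as outputs of a one-dimensional Gaussian mechanism, and convert the estimated Gaussian DP parameter into an (epsilon, delta) pair. The helpers below (canary_score, estimate_mu, gdp_to_epsilon) are hypothetical names for such a pipeline, not the authors' released code.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def canary_score(observed_update, canary):
    # Project the observed (clipped, noised) update onto a unit-norm canary
    # gradient; under DP-SGD this projection behaves like a one-dimensional
    # Gaussian mechanism output, shifted when the canary is present.
    return np.dot(observed_update, canary) / np.linalg.norm(canary)

def estimate_mu(scores_with, scores_without):
    # The standardized mean gap between scores from runs with and without the
    # canary estimates the Gaussian DP parameter mu of the overall mechanism.
    pooled = np.concatenate([scores_with, scores_without])
    return (np.mean(scores_with) - np.mean(scores_without)) / np.std(pooled)

def gdp_to_epsilon(mu, delta):
    # Convert mu-GDP to (epsilon, delta)-DP via the conversion of
    # Dong, Roth, and Su (2019):
    #   delta(eps) = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2).
    f = lambda eps: (norm.cdf(-eps / mu + mu / 2)
                     - np.exp(eps) * norm.cdf(-eps / mu - mu / 2) - delta)
    # Assumes the target delta lies between delta(0) and delta(100).
    return brentq(f, 0.0, 100.0)

This is only meant to illustrate why observing model updates and analyzing them with tight composition / Gaussian DP tools can replace thousands of end-to-end training runs with a handful; the paper's actual auditing procedure is described in the full text.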

Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone.

BibTeX
@inproceedings {291245,
author = {Milad Nasr and Jamie Hayes and Thomas Steinke and Borja Balle and Florian Tram{\`e}r and Matthew Jagielski and Nicholas Carlini and Andreas Terzis},
title = {Tight Auditing of Differentially Private Machine Learning},
booktitle = {32nd USENIX Security Symposium (USENIX Security 23)},
year = {2023},
isbn = {978-1-939133-37-3},
address = {Anaheim, CA},
pages = {1631--1648},
url = {https://www.usenix.org/conference/usenixsecurity23/presentation/nasr},
publisher = {USENIX Association},
month = aug
}

Presentation Video