Secure Floating-Point Training


Deevashwer Rathee, University of California, Berkeley; Anwesh Bhattacharya, Divya Gupta, and Rahul Sharma, Microsoft Research; Dawn Song, University of California, Berkeley


Secure 2-party computation (2PC) of floating-point arithmetic is improving in performance, and recent work runs deep learning algorithms with it while matching the numerical precision of commonly used machine learning (ML) frameworks such as PyTorch. We find that existing 2PC libraries for floating-point support only generic computations and lack specialized support for ML training. Hence, their latency and communication costs for compound operations (e.g., dot products) are high. We provide novel specialized 2PC protocols for compound operations and prove their precision using numerical analysis. Our implementation BEACON outperforms state-of-the-art libraries for 2PC of floating-point by over $6\times$.
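To illustrate (outside of 2PC) why treating a dot product as a single compound operation can be more precise than composing primitive floating-point operations, the sketch below compares a naive float32 dot product that rounds after every multiply and add against one that accumulates in higher precision and rounds only once at the end. This is a plain numerical illustration of the general idea, not the paper's protocol; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x32 = rng.standard_normal(10_000).astype(np.float32)
y32 = rng.standard_normal(10_000).astype(np.float32)

# High-precision reference: exact-ish dot product of the float32 inputs,
# computed in float64.
ref = np.dot(x32.astype(np.float64), y32.astype(np.float64))

# Naive composition of primitives: round to float32 after every
# multiply and every add, as a generic library composing primitive
# floating-point operations would.
naive = np.float32(0.0)
for a, b in zip(x32, y32):
    naive = np.float32(naive + np.float32(a * b))

# Compound operation: accumulate in higher precision and round to
# float32 once at the end.
fused = np.float32(ref)

err_naive = abs(float(naive) - ref)
err_fused = abs(float(fused) - ref)
print(err_naive, err_fused)
```

With 10,000 terms, the per-operation rounding errors of the naive loop accumulate, while the fused version incurs only a single final rounding, so `err_fused` is typically orders of magnitude smaller than `err_naive`. The same structural insight, evaluating a compound operation with one rounding step instead of many, is what specialized protocols can exploit to cut both error and 2PC communication.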


@inproceedings{287127,
author = {Deevashwer Rathee and Anwesh Bhattacharya and Divya Gupta and Rahul Sharma and Dawn Song},
title = {Secure {Floating-Point} Training},
booktitle = {32nd USENIX Security Symposium (USENIX Security 23)},
year = {2023},
isbn = {978-1-939133-37-3},
address = {Anaheim, CA},
pages = {6329--6346},
publisher = {USENIX Association},
month = aug,
}
