Scaling Distributed Machine Learning with In-Network Aggregation

Authors: 

Amedeo Sapio, Marco Canini, and Chen-Yu Ho, KAUST; Jacob Nelson, Microsoft; Panos Kalnis, KAUST; Changhoon Kim, Barefoot Networks; Arvind Krishnamurthy, University of Washington; Masoud Moshref, Barefoot Networks; Dan Ports, Microsoft; Peter Richtarik, KAUST

Abstract: 

Training machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide an efficient solution that speeds up training by up to 5.5 times for a number of real-world benchmark models.
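To make the core idea concrete, below is a minimal, self-contained Python sketch of the aggregation concept described in the abstract. It is not the SwitchML dataplane (which runs on a programmable switch and is co-designed with end-host protocols); all names and parameters here (SwitchAggregator, NUM_WORKERS, CHUNK_SIZE, SCALE) are illustrative assumptions chosen only to show how summing workers' gradient chunks at a central aggregation point reduces the data each worker must exchange.

# Illustrative sketch (not the SwitchML implementation): a toy model of
# in-network aggregation. Each worker streams a fixed-size chunk of its
# gradient, quantized to integers, to an aggregator slot; the slot sums the
# contributions from all workers and releases the aggregate, which workers
# dequantize back to floats. All constants below are illustrative.

import numpy as np

NUM_WORKERS = 4
CHUNK_SIZE = 64          # elements aggregated per slot (illustrative)
SCALE = 1 << 16          # fixed-point scaling factor (illustrative)

class SwitchAggregator:
    """Toy stand-in for one aggregation slot that sums integer chunks."""
    def __init__(self):
        self.acc = np.zeros(CHUNK_SIZE, dtype=np.int64)
        self.seen = 0

    def add(self, chunk_int):
        self.acc += chunk_int
        self.seen += 1
        # Once every worker has contributed, release and reset the slot.
        if self.seen == NUM_WORKERS:
            result = self.acc.copy()
            self.acc[:] = 0
            self.seen = 0
            return result
        return None

def quantize(chunk_float):
    """Convert a float chunk to fixed-point integers before sending."""
    return np.round(chunk_float * SCALE).astype(np.int64)

def dequantize(chunk_int):
    """Convert the aggregated integers back to floats at the workers."""
    return chunk_int.astype(np.float64) / SCALE

# Usage: each worker holds one gradient chunk; the released aggregate
# matches the elementwise sum of all workers' chunks.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(CHUNK_SIZE) for _ in range(NUM_WORKERS)]

slot = SwitchAggregator()
aggregate = None
for g in grads:
    out = slot.add(quantize(g))
    if out is not None:
        aggregate = dequantize(out)

assert np.allclose(aggregate, sum(grads), atol=1e-4)

The sketch uses integer (fixed-point) arithmetic because switch dataplanes operate on integers rather than floating point, and it aggregates in fixed-size chunks because a switch has only a small pool of on-chip aggregation slots; the paper's design streams the model update through such a slot pool rather than holding the whole update at once.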


BibTeX
@inproceedings {265065,
author = {Amedeo Sapio and Marco Canini and Chen-Yu Ho and Jacob Nelson and Panos Kalnis and Changhoon Kim and Arvind Krishnamurthy and Masoud Moshref and Dan Ports and Peter Richtarik},
title = {Scaling Distributed Machine Learning with {In-Network} Aggregation},
booktitle = {18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21)},
year = {2021},
isbn = {978-1-939133-21-2},
pages = {785--808},
url = {https://www.usenix.org/conference/nsdi21/presentation/sapio},
publisher = {USENIX Association},
month = apr
}

Presentation Video