Huanzhou Zhu, Imperial College London; Bo Zhao, Imperial College London and Aalto University; Gang Chen, Weifeng Chen, Yijie Chen, and Liang Shi, Huawei Technologies Co., Ltd.; Yaodong Yang, Peking University; Peter Pietzuch, Imperial College London; Lei Chen, Hong Kong University of Science and Technology
A wide range of reinforcement learning (RL) algorithms have been proposed, in which agents learn from interactions with a simulated environment. Executing such RL training loops is computationally expensive, but current RL systems fail to support the training loops of different RL algorithms efficiently on GPU clusters: they either hard-code algorithm-specific strategies for parallelization and distribution; or they accelerate only parts of the computation on GPUs (e.g., DNN policy updates). We observe that current systems lack an abstraction that decouples the definition of an RL algorithm from its strategy for distributed execution.
We describe MSRL, a distributed RL training system that uses the new abstraction of a fragmented dataflow graph (FDG) to execute RL algorithms in a flexible way. An FDG is a heterogeneous dataflow representation of an RL algorithm, which maps functions from the RL training loop to independent parallel dataflow fragments. Fragments account for the diverse nature of RL algorithms: each fragment can execute on a different device through a low-level dataflow implementation, e.g., an operator graph of a DNN engine, a CUDA GPU kernel, or a multi-threaded CPU process. At deployment time, a distribution policy governs how fragments are mapped to devices, without requiring changes to the RL algorithm implementation. Our experiments show that MSRL exposes trade-offs between different execution strategies, while surpassing the performance of existing RL systems with fixed execution strategies.
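To make the abstraction concrete, the following is a minimal sketch (not the MSRL API; all class and fragment names are hypothetical) of the idea the abstract describes: an RL training loop decomposed into named fragments, with a separate distribution policy deciding which device each fragment runs on, so the algorithm definition itself never changes across deployments.

```python
# Hypothetical sketch of a fragmented-dataflow-style decomposition.
# Names (FragmentedDataflowGraph, deploy, the fragment names) are
# illustrative assumptions, not MSRL's actual interface.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Fragment:
    name: str
    fn: Callable  # the fragment's computation (e.g., env step, policy update)


class FragmentedDataflowGraph:
    def __init__(self) -> None:
        self.fragments: Dict[str, Fragment] = {}

    def add(self, name: str, fn: Callable) -> None:
        self.fragments[name] = Fragment(name, fn)

    def deploy(self, policy: Dict[str, str]) -> Dict[str, str]:
        # Map each fragment to a device according to the distribution
        # policy; fragments without an entry default to cpu:0. The
        # fragment functions themselves are untouched.
        return {name: policy.get(name, "cpu:0") for name in self.fragments}


# One algorithm definition: three fragments of a generic RL training loop.
fdg = FragmentedDataflowGraph()
fdg.add("actor", lambda env: "trajectories")       # environment interaction
fdg.add("learner", lambda batch: "policy update")  # DNN policy update
fdg.add("buffer", lambda traj: "sampled batch")    # experience storage

# Two distribution policies for the *same* algorithm definition:
single_gpu = fdg.deploy({"actor": "cpu:0", "learner": "gpu:0"})
multi_gpu = fdg.deploy({"actor": "gpu:0", "learner": "gpu:1"})
print(single_gpu["learner"])  # gpu:0
print(multi_gpu["learner"])   # gpu:1
```

The point of the separation is that switching from the first deployment to the second requires only a new policy dictionary, mirroring the abstract's claim that distribution changes need no edits to the algorithm implementation.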
USENIX ATC '23 Open Access Sponsored by King Abdullah University of Science and Technology (KAUST)
@inproceedings{zhu-huanzhou,
author = {Huanzhou Zhu and Bo Zhao and Gang Chen and Weifeng Chen and Yijie Chen and Liang Shi and Yaodong Yang and Peter Pietzuch and Lei Chen},
title = {{MSRL}: Distributed Reinforcement Learning with Dataflow Fragments},
booktitle = {2023 USENIX Annual Technical Conference (USENIX ATC 23)},
year = {2023},
isbn = {978-1-939133-35-9},
address = {Boston, MA},
pages = {977--993},
url = {https://www.usenix.org/conference/atc23/presentation/zhu-huanzhou},
publisher = {USENIX Association},
month = jul
}