
ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning (2112.05923v2)

Published 11 Dec 2021 in cs.LG, cs.AI, and cs.DC

Abstract: Deep reinforcement learning (DRL) has revolutionized learning and actuation in applications such as game playing and robotic control. The cost of data collection, i.e., generating transitions from agent-environment interactions, remains a major challenge for wider DRL adoption in complex real-world problems. Following a cloud-native paradigm to train DRL agents on a GPU cloud platform is a promising solution. In this paper, we present ElegantRL-podracer, a scalable and elastic library for cloud-native deep reinforcement learning that efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels. At a high level, ElegantRL-podracer employs a tournament-based ensemble scheme to orchestrate the training process on hundreds or even thousands of GPUs, scheduling the interactions between a leaderboard and a training pool with hundreds of pods. At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 CUDA cores in a single GPU. Our ElegantRL-podracer library features high scalability, elasticity, and accessibility by following the development principles of containerization, microservices, and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct extensive experiments on various tasks in locomotion and stock trading and show that ElegantRL-podracer substantially outperforms RLlib. Our code is available on GitHub.
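The tournament-based ensemble scheme described in the abstract can be sketched as follows. This is a minimal illustration, not code from the ElegantRL-podracer library: the `Leaderboard` class, `train_pod`, and `evaluate` functions are hypothetical stand-ins for the real orchestration logic, and a single float stands in for an agent's parameters. The key idea it shows is the leaderboard/training-pool interaction: pods initialize from top-ranked agents, train, and submit results back.

```python
import random

class Leaderboard:
    """Keeps the top-k agents seen so far, ranked by evaluation score."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []  # list of (score, agent_state) pairs

    def submit(self, score, agent_state):
        self.entries.append((score, agent_state))
        self.entries.sort(key=lambda e: e[0], reverse=True)
        del self.entries[self.capacity:]  # drop everything below top-k

    def sample(self):
        """Pick an initialization for a new pod from the top performers."""
        return random.choice(self.entries)[1]

def train_pod(agent_state, steps=100):
    """Stand-in for one pod's training loop: noisy improvement of a scalar 'policy'."""
    for _ in range(steps):
        agent_state += random.uniform(-0.5, 1.0) * 0.01
    return agent_state

def evaluate(agent_state):
    """Stand-in for policy evaluation: here the state itself is the score."""
    return agent_state

def tournament(generations=10, pods=8):
    """Orchestrate pods: each generation, every pod starts from a
    leaderboard agent, trains, and submits its result back."""
    board = Leaderboard()
    board.submit(0.0, 0.0)  # seed entry so the first pods have an initializer
    for _ in range(generations):
        for _ in range(pods):
            init = board.sample()
            trained = train_pod(init)
            board.submit(evaluate(trained), trained)
    return board.entries[0][0]  # best score found

random.seed(0)
best = tournament()
print(f"best leaderboard score: {best:.3f}")
```

In the actual system each "pod" is a container running thousands of parallel environment simulations on one GPU; the sketch only captures the high-level scheduling loop between the leaderboard and the training pool.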

Authors (8)
  1. Xiao-Yang Liu (62 papers)
  2. Zechu Li (7 papers)
  3. Zhuoran Yang (155 papers)
  4. Jiahao Zheng (9 papers)
  5. Zhaoran Wang (164 papers)
  6. Anwar Walid (21 papers)
  7. Jian Guo (76 papers)
  8. Michael I. Jordan (438 papers)
Citations (23)
