
The Architectural Implications of Distributed Reinforcement Learning on CPU-GPU Systems (2012.04210v1)

Published 8 Dec 2020 in cs.LG and cs.AR

Abstract: With deep reinforcement learning (RL) methods achieving results that exceed human capabilities in games, robotics, and simulated environments, continued scaling of RL training is crucial to its deployment in solving complex real-world problems. However, improving the performance scalability and power efficiency of RL training through understanding the architectural implications of CPU-GPU systems remains an open problem. In this work we investigate and improve the performance and power efficiency of distributed RL training on CPU-GPU systems by approaching the problem not solely from the GPU microarchitecture perspective but following a holistic system-level analysis approach. We quantify the overall hardware utilization on a state-of-the-art distributed RL training framework and empirically identify the bottlenecks caused by GPU microarchitectural, algorithmic, and system-level design choices. We show that the GPU microarchitecture itself is well-balanced for state-of-the-art RL frameworks, but further investigation reveals that the number of actors running the environment interactions and the amount of hardware resources available to them are the primary performance and power efficiency limiters. To this end, we introduce a new system design metric, CPU/GPU ratio, and show how to find the optimal balance between CPU and GPU resources when designing scalable and efficient CPU-GPU systems for RL training.
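The abstract's CPU/GPU ratio metric can be illustrated with a minimal back-of-the-envelope sketch. This is not the paper's exact formulation; the function name, parameters, and example numbers below are illustrative assumptions. The idea: if CPU-side actors cannot generate experience as fast as the learner GPU consumes it, the GPU starves, so one can estimate the minimum CPU cores per GPU needed to keep training throughput-bound on the GPU.

```python
# Illustrative sketch (assumed quantities, not the paper's formula):
# estimate the CPU-cores-per-GPU ratio at which actor-side environment
# stepping stops being the bottleneck in distributed RL training.

def min_cpu_gpu_ratio(gpu_samples_per_sec: float,
                      actor_samples_per_sec: float,
                      cores_per_actor: float) -> float:
    """Smallest CPU-cores-per-GPU count that keeps the learner GPU fed.

    gpu_samples_per_sec:   samples/sec one learner GPU can consume (assumed)
    actor_samples_per_sec: samples/sec one CPU actor can produce (assumed)
    cores_per_actor:       CPU cores each actor process occupies (assumed)
    """
    actors_needed = gpu_samples_per_sec / actor_samples_per_sec
    return actors_needed * cores_per_actor

# Hypothetical example: a GPU consuming 40,000 samples/s, actors each
# producing 500 samples/s on 1 core -> 80 cores per GPU to avoid starvation.
print(min_cpu_gpu_ratio(40_000, 500, 1.0))  # → 80.0
```

Real deployments would also account for learner-side CPU work, inference batching on the actors, and communication overheads, which the paper's system-level analysis addresses.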

Authors (7)
  1. Ahmet Inci (7 papers)
  2. Evgeny Bolotin (2 papers)
  3. Yaosheng Fu (4 papers)
  4. Gal Dalal (30 papers)
  5. Shie Mannor (228 papers)
  6. David Nellans (4 papers)
  7. Diana Marculescu (64 papers)
Citations (13)
