ROAM: memory-efficient large DNN training via optimized operator ordering and memory layout (2310.19295v1)

Published 30 Oct 2023 in cs.LG, cs.AI, and cs.DB

Abstract: As deep learning models continue to grow in size, the memory required for training has surged. High-level techniques such as offloading, recomputation, and compression can alleviate memory pressure, but they also introduce overheads. In contrast, a memory-efficient execution plan, consisting of a well-chosen operator execution order and tensor memory layout, can significantly improve a model's memory efficiency and reduce the overheads of those high-level techniques. In this paper, we propose ROAM, which operates at the computation-graph level to derive a memory-efficient execution plan with an optimized operator order and tensor memory layout. We first develop theory that carefully accounts for model structure and training memory load, enabling optimization for large, complex graphs that have not been well supported in the past. We further propose an efficient tree-based algorithm that automatically searches for task divisions while delivering high performance and effectiveness. Experiments show that ROAM reduces memory by 35.7%, 13.3%, and 27.2% compared to PyTorch and two state-of-the-art methods, respectively, and offers a remarkable 53.7x speedup. An evaluation on the large GPT2-XL model further validates ROAM's scalability.
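
To make the idea of operator-order optimization concrete, the toy sketch below (not ROAM's actual algorithm; the operator names and output sizes are made up) enumerates topological orders of a tiny computation graph and picks the one with the lowest peak of live-tensor memory, assuming each output stays alive until its last consumer has run:

```python
# Toy illustration: operator execution order alone changes peak memory.
# Hypothetical operators with made-up output sizes (MB); not ROAM itself.
from itertools import permutations

# op -> (output size in MB, ops whose outputs it consumes)
graph = {
    "input":   (128, []),
    "big":     (512, ["input"]),
    "small":   (64,  ["input"]),
    "squeeze": (64,  ["big"]),
    "join":    (64,  ["small", "squeeze"]),
}

def consumers(op):
    return [o for o, (_, deps) in graph.items() if op in deps]

def is_topological(order):
    seen = set()
    for op in order:
        if any(dep not in seen for dep in graph[op][1]):
            return False
        seen.add(op)
    return True

def peak_memory(order):
    """Peak sum of live tensors; an output is freed after its last consumer runs."""
    live, executed, peak = {}, set(), 0
    for op in order:
        live[op] = graph[op][0]              # allocate this op's output
        peak = max(peak, sum(live.values()))
        executed.add(op)
        for t in list(live):                 # free fully-consumed outputs
            cons = consumers(t)
            if cons and all(c in executed for c in cons):
                del live[t]
    return peak

best = min((o for o in permutations(graph) if is_topological(o)), key=peak_memory)
print("best order:", best, "peak MB:", peak_memory(best))
```

Scheduling "squeeze" right after "big" lets the 512 MB tensor be freed early, lowering the peak versus orders that keep it alive alongside other branches; ROAM searches for this kind of plan (plus a memory layout) at the scale of full training graphs.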

Authors (6)
  1. Huiyao Shu (1 paper)
  2. Ang Wang (13 papers)
  3. Ziji Shi (7 papers)
  4. Hanyu Zhao (23 papers)
  5. Yong Li (630 papers)
  6. Lu Lu (189 papers)
Citations (1)
