TensorOpt: Exploring the Tradeoffs in Distributed DNN Training with Auto-Parallelism (2004.10856v1)

Published 16 Apr 2020 in cs.DC, cs.LG, and stat.ML

Abstract: A good parallelization strategy can significantly improve the efficiency or reduce the cost of distributed training of deep neural networks (DNNs). Recently, several methods have been proposed to find efficient parallelization strategies, but they all optimize a single objective (e.g., execution time, memory consumption) and produce only one strategy. We propose FT, an efficient algorithm that searches for an optimal set of parallelization strategies to allow trade-offs among different objectives. FT can adapt to different scenarios by minimizing memory consumption when the number of devices is limited and fully utilizing additional resources to reduce execution time. For popular DNN models (e.g., vision, language), an in-depth analysis is conducted to understand the trade-offs among different objectives and their influence on the parallelization strategies. We also develop a user-friendly system, called TensorOpt, which allows users to run their distributed DNN training jobs without caring about the details of parallelization strategies. Experimental results show that FT runs efficiently and provides accurate estimation of runtime costs, and that TensorOpt is more flexible in adapting to resource availability than existing frameworks.
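To make the time/memory trade-off concrete, here is a minimal, hypothetical Python sketch of the kind of strategy frontier that FT exposes: given candidate parallelization strategies with estimated per-iteration time and peak per-device memory, it keeps only the Pareto-optimal ones. The Strategy class, the strategy names, and the cost numbers are invented for illustration; this is not the paper's FT algorithm or the TensorOpt API.

```python
# Illustrative sketch (not the paper's FT implementation): filter candidate
# parallelization strategies down to the time/memory Pareto frontier, i.e.
# the set of trade-off points a user could choose among.
from dataclasses import dataclass


@dataclass(frozen=True)
class Strategy:
    name: str          # hypothetical label for how the model is partitioned
    time_ms: float     # estimated execution time per training iteration
    memory_gb: float   # estimated peak memory per device


def pareto_frontier(candidates):
    """Return the strategies not dominated in both time and memory."""
    frontier = []
    # Sort by time (then memory); a candidate survives only if it uses less
    # memory than every faster strategy already kept.
    for s in sorted(candidates, key=lambda c: (c.time_ms, c.memory_gb)):
        if not frontier or s.memory_gb < frontier[-1].memory_gb:
            frontier.append(s)
    return frontier


if __name__ == "__main__":
    # Invented example numbers, for illustration only.
    candidates = [
        Strategy("data-parallel",        time_ms=120.0, memory_gb=28.0),
        Strategy("hybrid-dp-mp",         time_ms=135.0, memory_gb=22.0),
        Strategy("tensor-split-fc",      time_ms=150.0, memory_gb=18.0),
        Strategy("dominated-example",    time_ms=160.0, memory_gb=25.0),
        Strategy("fully-model-parallel", time_ms=210.0, memory_gb=12.0),
    ]
    for s in pareto_frontier(candidates):
        print(f"{s.name}: {s.time_ms} ms/iter, {s.memory_gb} GB/device")
```

With such a frontier in hand, a system can pick the fastest strategy that fits the available per-device memory, which is the kind of adaptation to resource availability the abstract describes.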

Authors (8)
  1. Zhenkun Cai
  2. Kaihao Ma
  3. Xiao Yan
  4. Yidi Wu
  5. Yuzhen Huang
  6. James Cheng
  7. Teng Su
  8. Fan Yu
Citations (37)
