The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism (2007.12856v1)

Published 25 Jul 2020 in cs.DC and cs.LG

Abstract: We present scalable hybrid-parallel algorithms for training large-scale 3D convolutional neural networks. Deep learning-based emerging scientific workflows often require model training with large, high-dimensional samples, which can make training much more costly and even infeasible due to excessive memory usage. We solve these challenges by extensively applying hybrid parallelism throughout the end-to-end training pipeline, including both computations and I/O. Our hybrid-parallel algorithm extends the standard data parallelism with spatial parallelism, which partitions a single sample in the spatial domain, realizing strong scaling beyond the mini-batch dimension with a larger aggregated memory capacity. We evaluate our proposed training algorithms with two challenging 3D CNNs, CosmoFlow and 3D U-Net. Our comprehensive performance studies show that good weak and strong scaling can be achieved for both networks using up to 2K GPUs. More importantly, we enable training of CosmoFlow with much larger samples than previously possible, realizing an order-of-magnitude improvement in prediction accuracy.
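To make the idea of spatial parallelism concrete, the sketch below (not the authors' implementation, which is built on hybrid data/spatial parallelism with GPU-side communication) shows how a single 3D sample could be split along one spatial axis across workers, with a small halo of neighboring voxels so each worker can compute a valid 3D convolution on its shard. The function name, shapes, and halo width are illustrative assumptions only.

```python
# Minimal illustrative sketch of spatial partitioning for 3D CNN training.
# Assumed, hypothetical helper; not the paper's actual code.
import numpy as np

def partition_with_halo(volume, num_workers, halo, axis=0):
    """Split `volume` along `axis` into `num_workers` shards, each extended by
    `halo` voxels of neighboring data (truncated at the outer boundaries)."""
    size = volume.shape[axis]
    chunk = size // num_workers
    shards = []
    for r in range(num_workers):
        lo = max(r * chunk - halo, 0)
        hi = min((r + 1) * chunk + halo, size)
        sl = [slice(None)] * volume.ndim
        sl[axis] = slice(lo, hi)
        shards.append(volume[tuple(sl)])
    return shards

# Example: one 128^3 sample split across 4 workers along the first axis.
# For a 3x3x3 convolution, a 1-voxel halo from each neighbor suffices.
sample = np.random.rand(128, 128, 128).astype(np.float32)
shards = partition_with_halo(sample, num_workers=4, halo=1, axis=0)
print([s.shape for s in shards])  # interior shards: (34, 128, 128)
```

In an actual distributed run, each shard would live on a different GPU and the halo regions would be refreshed by neighbor-to-neighbor communication before every convolution; combining this with ordinary data parallelism over the mini-batch gives the hybrid scheme described in the abstract.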

Authors (9)
  1. Yosuke Oyama (4 papers)
  2. Naoya Maruyama (3 papers)
  3. Nikoli Dryden (21 papers)
  4. Erin McCarthy (2 papers)
  5. Peter Harrington (22 papers)
  6. Jan Balewski (16 papers)
  7. Satoshi Matsuoka (33 papers)
  8. Peter Nugent (58 papers)
  9. Brian Van Essen (9 papers)
Citations (36)