Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism (1903.06681v1)

Published 15 Mar 2019 in cs.DC and cs.LG

Abstract: Scaling CNN training is necessary to keep up with growing datasets and reduce training time. We also see an emerging need to handle datasets with very large samples, where memory requirements for training are large. Existing training frameworks use a data-parallel approach that partitions samples within a mini-batch, but limits on scaling the mini-batch size and memory consumption make this untenable for large samples. We describe and implement new approaches to convolution, which parallelize using spatial decomposition or a combination of sample and spatial decomposition. This introduces many performance knobs for a network, so we develop a performance model for CNNs and present a method for using it to automatically determine efficient parallelization strategies. We evaluate our algorithms with microbenchmarks and image classification with ResNet-50. Our algorithms allow us to prototype a model for a mesh-tangling dataset, where sample sizes are very large. We show that our parallelization achieves excellent strong and weak scaling and enables training for previously unreachable datasets.
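
To make the idea of spatial decomposition concrete, below is a minimal single-process sketch, not the authors' implementation: the input is split along its height across simulated workers, each tile is extended with the halo rows a neighbouring partition would contribute, and each worker then convolves its tile independently. The function names (`conv2d_valid`, `spatially_decomposed_conv`) and the 2D, single-channel setting are assumptions for illustration only.

```python
import numpy as np

def conv2d_valid(x, k):
    """Reference 'valid' cross-correlation of a 2D array with a small kernel."""
    windows = np.lib.stride_tricks.sliding_window_view(x, k.shape)
    return np.einsum("ijkl,kl->ij", windows, k)

def spatially_decomposed_conv(x, k, num_workers=4):
    """Illustrative spatial decomposition: split x along height, add halo rows,
    convolve each tile locally, and stitch the partial outputs back together.
    In a distributed run the halo rows would arrive from neighbouring workers
    via communication; here we simply slice them from the shared array."""
    halo = k.shape[0] - 1                       # extra rows each tile needs
    out_h = x.shape[0] - halo                   # height of the full output
    bounds = np.linspace(0, out_h, num_workers + 1).astype(int)
    partial_outputs = []
    for w in range(num_workers):
        lo, hi = bounds[w], bounds[w + 1] + halo  # tile plus its halo region
        tile = x[lo:hi]                           # "received" halo rows included
        partial_outputs.append(conv2d_valid(tile, k))  # independent local work
    return np.vstack(partial_outputs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.standard_normal((64, 64))
    kernel = rng.standard_normal((3, 3))
    full = conv2d_valid(image, kernel)
    split = spatially_decomposed_conv(image, kernel, num_workers=4)
    print("max abs difference:", np.abs(full - split).max())  # ~0.0
```

In an actual multi-process setting the halo exchange would be performed with point-to-point communication (e.g. MPI) in both the forward and backward passes, and the decomposition could also be combined with the usual sample (data-parallel) partitioning, which is the hybrid the abstract refers to.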

Authors (6)
  1. Nikoli Dryden (21 papers)
  2. Naoya Maruyama (3 papers)
  3. Tom Benson (2 papers)
  4. Tim Moon (5 papers)
  5. Marc Snir (8 papers)
  6. Brian Van Essen (9 papers)
Citations (49)
