
Communication-Efficient Adaptive Batch Size Strategies for Distributed Local Gradient Methods (2406.13936v2)

Published 20 Jun 2024 in stat.ML, cs.LG, and math.OC

Abstract: Modern deep neural networks often require distributed training with many workers due to their large size. As the number of workers increases, communication overheads become the main bottleneck in data-parallel minibatch stochastic gradient methods with per-iteration gradient synchronization. Local gradient methods like Local SGD reduce communication by only synchronizing model parameters and/or gradients after several local steps. Despite an understanding of their convergence and the importance of batch sizes for training efficiency and generalization, optimal batch sizes for local gradient methods are difficult to determine. We introduce adaptive batch size strategies for local gradient methods that increase batch sizes adaptively to reduce minibatch gradient variance. We provide convergence guarantees under homogeneous data conditions and support our claims with image classification and language modeling experiments, demonstrating the effectiveness of our strategies for both training efficiency and generalization.
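
To make the setup concrete, below is a minimal single-process sketch (NumPy) of Local SGD with a variance-based adaptive batch size rule: each worker takes several local steps, the workers average their parameters at each synchronization, and the per-worker batch size is grown when a norm-test-style statistic suggests that minibatch gradient variance dominates the gradient signal. The least-squares objective, threshold, growth factor, and step counts are illustrative assumptions, not the paper's exact algorithm or implementation.

```python
# Minimal sketch of Local SGD with an adaptive (growing) batch size.
# Hypothetical illustration: the paper's exact test statistic, constants,
# and distributed implementation may differ.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic homogeneous data: every worker samples from the same distribution.
d = 10
w_true = rng.normal(size=d)

def sample_batch(batch_size):
    X = rng.normal(size=(batch_size, d))
    y = X @ w_true + 0.1 * rng.normal(size=batch_size)
    return X, y

def grad(w, X, y):
    # Gradient of the mean squared error 0.5 * mean((Xw - y)^2).
    return X.T @ (X @ w - y) / len(y)

num_workers = 4
H = 8              # local steps between synchronizations (hypothetical value)
lr = 0.05
batch_size = 8     # initial per-worker batch size
theta = 1.0        # variance-test threshold (hypothetical value)
growth = 2         # batch size multiplier when the test fails

# Each worker keeps its own copy of the parameters.
workers = [np.zeros(d) for _ in range(num_workers)]

for rnd in range(20):                      # communication rounds
    for _ in range(H):                     # local steps, no communication
        for k in range(num_workers):
            X, y = sample_batch(batch_size)
            workers[k] -= lr * grad(workers[k], X, y)

    # Synchronize: average model parameters across workers.
    avg = sum(workers) / num_workers
    workers = [avg.copy() for _ in range(num_workers)]

    # Adaptive batch size via a norm-test-style criterion: grow the batch
    # when the variance of the minibatch gradient estimator is large
    # relative to the squared norm of the gradient estimate itself.
    X, y = sample_batch(batch_size)
    per_sample = X * (X @ avg - y)[:, None]            # per-sample gradients
    g = per_sample.mean(axis=0)
    var = per_sample.var(axis=0).sum() / batch_size    # variance of the mean
    if var > theta * np.dot(g, g):
        batch_size *= growth

    loss = 0.5 * np.mean((X @ avg - y) ** 2)
    print(f"round {rnd:2d}  batch_size {batch_size:4d}  loss {loss:.4f}")
```

Run as-is, the batch size grows only in the later rounds, once the gradient norm has shrunk enough that sampling noise dominates; this is the intuition behind increasing batch sizes to reduce minibatch gradient variance while keeping communication rounds (and hence synchronization cost) fixed.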

Authors (5)
  1. Tim Tsz-Kit Lau (10 papers)
  2. Weijian Li (39 papers)
  3. Chenwei Xu (11 papers)
  4. Han Liu (340 papers)
  5. Mladen Kolar (80 papers)
Citations (1)
