On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent (1811.12941v1)

Published 30 Nov 2018 in cs.LG, cs.DC, and stat.ML

Abstract: Increasing the mini-batch size for stochastic gradient descent offers significant opportunities to reduce wall-clock training time, but there are a variety of theoretical and systems challenges that impede the widespread success of this technique. We investigate these issues, with an emphasis on time to convergence and total computational cost, through an extensive empirical analysis of network training across several architectures and problem domains, including image classification, image segmentation, and language modeling. Although it is common practice to increase the batch size in order to fully exploit available computational resources, we find a substantially more nuanced picture. Our main finding is that across a wide range of network architectures and problem domains, increasing the batch size beyond a certain point yields no decrease in wall-clock time to convergence for \emph{either} train or test loss. This batch size is usually substantially below the capacity of current systems. We show that popular training strategies for large batch size optimization begin to fail before we can populate all available compute resources, and we show that the point at which these methods break down depends more on attributes like model architecture and data complexity than it does directly on the size of the dataset.
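To make the abstract's central measurement concrete, below is a minimal, illustrative sketch, not the paper's experimental setup (which trains deep networks for image classification, segmentation, and language modeling). It times plain mini-batch SGD on a synthetic least-squares problem and records how many steps and how much wall-clock time each batch size needs to reach a fixed training-loss target. The problem size, learning rate, target loss, and the helper name `sgd_time_to_target` are assumptions chosen only for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's experiments):
# compare steps and wall-clock time to a fixed training-loss target
# for mini-batch SGD at several batch sizes.
import time
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 50
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def full_loss(w):
    # Mean squared error over the whole training set.
    return 0.5 * np.mean((X @ w - y) ** 2)

def sgd_time_to_target(batch_size, lr=0.05, target=0.02, max_steps=20_000):
    # Run mini-batch SGD until the full training loss falls below `target`.
    w = np.zeros(d)
    start = time.perf_counter()
    for step in range(1, max_steps + 1):
        idx = rng.integers(0, n, size=batch_size)          # sample a mini-batch
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w -= lr * grad
        if full_loss(w) <= target:
            return step, time.perf_counter() - start
    return max_steps, time.perf_counter() - start

for bs in (8, 64, 512, 4096):
    steps, secs = sgd_time_to_target(bs)
    print(f"batch={bs:5d}  steps-to-target={steps:6d}  wall-clock={secs:.3f}s")
```

On a toy problem like this every batch size converges quickly; the paper's finding is that for realistic networks the wall-clock benefit of growing the batch saturates well below the capacity of current systems.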

Authors (8)
  1. Noah Golmant (2 papers)
  2. Nikita Vemuri (3 papers)
  3. Zhewei Yao (64 papers)
  4. Vladimir Feinberg (8 papers)
  5. Amir Gholami (60 papers)
  6. Kai Rothauge (7 papers)
  7. Michael W. Mahoney (233 papers)
  8. Joseph Gonzalez (35 papers)
Citations (70)
