The Break-Even Point on Optimization Trajectories of Deep Neural Networks (2002.09572v1)

Published 21 Feb 2020 in cs.LG and stat.ML

Abstract: The early phase of training of deep neural networks is critical for their final performance. In this work, we study how the hyperparameters of stochastic gradient descent (SGD) used in the early phase of training affect the rest of the optimization trajectory. We argue for the existence of the "break-even" point on this trajectory, beyond which the curvature of the loss surface and noise in the gradient are implicitly regularized by SGD. In particular, we demonstrate on multiple classification tasks that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients. These effects are beneficial from the optimization perspective and become visible after the break-even point. Complementing prior work, we also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers. In short, our work shows that key properties of the loss surface are strongly influenced by SGD in the early phase of training. We argue that studying the impact of the identified effects on generalization is a promising future direction.
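The effect described in the abstract can be probed empirically. Below is a minimal sketch (not the authors' code) of one way to do so: train the same small network with a low versus a large initial learning rate and track the total variance of mini-batch gradients along the early trajectory. PyTorch, MNIST, the two-layer MLP, the learning-rate values, and the helper names (flat_grad, gradient_variance, run) are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    def flat_grad(model):
        """Concatenate all parameter gradients into one vector."""
        return torch.cat([p.grad.detach().flatten() for p in model.parameters()])

    def gradient_variance(model, loss_fn, loader, n_batches=16):
        """Estimate the trace of the mini-batch gradient covariance (total gradient variance)."""
        grads = []
        for i, (x, y) in enumerate(loader):
            if i >= n_batches:
                break
            model.zero_grad()
            loss_fn(model(x), y).backward()
            grads.append(flat_grad(model))
        g = torch.stack(grads)
        return g.var(dim=0, unbiased=True).sum().item()

    def run(lr, steps=300):
        torch.manual_seed(0)
        train = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
        loader = DataLoader(train, batch_size=128, shuffle=True)
        model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        it = iter(loader)
        for step in range(steps):
            try:
                x, y = next(it)
            except StopIteration:
                it = iter(loader)
                x, y = next(it)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
            if step % 100 == 0:
                var = gradient_variance(model, F.cross_entropy, loader)
                print(f"lr={lr} step={step} gradient variance ~ {var:.4f}")

    for lr in (0.01, 0.5):  # "low" vs. "large" initial learning rate (illustrative values)
        run(lr)

Under the paper's claim, the large-learning-rate run should show lower gradient variance after the break-even point than the low-learning-rate run; the same loop could be extended to estimate the conditioning of the gradient covariance (e.g., via its leading eigenvalues) rather than only its trace.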

Authors (7)
  1. Maciej Szymczak (1 paper)
  2. Stanislav Fort (30 papers)
  3. Devansh Arpit (31 papers)
  4. Jacek Tabor (106 papers)
  5. Kyunghyun Cho (292 papers)
  6. Krzysztof Geras (4 papers)
  7. Stanislaw Jastrzebski (7 papers)
Citations (144)
