The large learning rate phase of deep learning: the catapult mechanism (2003.02218v1)

Published 4 Mar 2020 in stat.ML and cs.LG

Abstract: The choice of initial learning rate can have a profound effect on the performance of deep networks. We present a class of neural networks with solvable training dynamics, and confirm their predictions empirically in practical deep learning settings. The networks exhibit sharply distinct behaviors at small and large learning rates. The two regimes are separated by a phase transition. In the small learning rate phase, training can be understood using the existing theory of infinitely wide neural networks. At large learning rates the model captures qualitatively distinct phenomena, including the convergence of gradient descent dynamics to flatter minima. One key prediction of our model is a narrow range of large, stable learning rates. We find good agreement between our model's predictions and training dynamics in realistic deep learning settings. Furthermore, we find that the optimal performance in such settings is often found in the large learning rate phase. We believe our results shed light on characteristics of models trained at different learning rates. In particular, they fill a gap between existing wide neural network theory, and the nonlinear, large learning rate, training dynamics relevant to practice.

Authors (5)
  1. Aitor Lewkowycz (31 papers)
  2. Yasaman Bahri (20 papers)
  3. Ethan Dyer (32 papers)
  4. Jascha Sohl-Dickstein (88 papers)
  5. Guy Gur-Ari (28 papers)
Citations (209)

Summary

  • The paper identifies the 'catapult phase', where high learning rates cause an initial loss increase followed by rapid convergence to flatter minima.
  • The paper supports its findings with robust empirical evidence across fully connected, convolutional, and residual networks, pinpointing critical learning rate thresholds.
  • The paper bridges infinite-width linear approximations and nonlinear training dynamics, offering actionable insights for optimal learning rate selection and enhanced model generalization.

The Large Learning Rate Phase of Deep Learning: The Catapult Mechanism

The paper, "The large learning rate phase of deep learning: the catapult mechanism," by Lewkowycz et al., presents a significant analysis of the training dynamics of neural networks under varying learning rate regimes. It introduces the concept of the "catapult mechanism" as a novel explanation for the behavior of deep networks when trained with large learning rates. This paper provides a detailed theoretical framework supported by empirical evidence that elucidates why deep networks trained at high learning rates tend to converge towards flatter minima, which is closely associated with improved generalization.

Detailed Analysis of Learning Rate Regimes

The authors delineate three distinct learning rate regimes for gradient descent: the lazy phase, the catapult phase, and the divergent phase. The central discovery of the paper is the identification and characterization of the catapult phase, which occupies the range of large learning rates between the lazy and divergent phases. In this phase, the training loss initially increases and then converges rapidly to a flatter minimum as the dynamics stabilize. This behavior departs from the linearized dynamics of the lazy phase, where infinite-width (neural tangent kernel) theory applies and the learning rate must remain below a critical value for training to converge.
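For orientation, the phase boundaries predicted by the paper's solvable model can be summarized in terms of the top eigenvalue \(\lambda_0\) of the neural tangent kernel at initialization; the \(4/\lambda_0\) upper bound is specific to that model, and the maximal stable rate for realistic networks is determined empirically and can be larger:

```latex
\[
\text{lazy phase: } \eta < \frac{2}{\lambda_0}, \qquad
\text{catapult phase: } \frac{2}{\lambda_0} < \eta < \eta_{\max}, \qquad
\text{divergent phase: } \eta > \eta_{\max},
\]
\[
\eta_{\max} = \frac{4}{\lambda_0} \quad \text{(solvable model)}.
\]
```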

Empirical Confirmation

The paper provides robust empirical results across multiple neural network architectures, including fully connected, convolutional, and residual networks, which validate the theoretical predictions. Of particular note is the empirical measurement of the critical learning rate thresholds: the maximal stable learning rate is found to be substantially higher than the value suggested by infinite-width linearized dynamics, especially for networks with ReLU activations.
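As a toy illustration of the predicted behavior (a hypothetical sketch, not the paper's experimental code), the two-layer linear model with a single training example can be simulated directly. With a learning rate between 2/λ₀ and 4/λ₀, the printed loss first grows by several orders of magnitude and then collapses, while the curvature λ settles below its initial value:

```python
import numpy as np

# Minimal sketch of a catapult run: a two-layer linear network
# f(x) = (u . v) x / sqrt(m), trained on one example (x, y) with plain
# gradient descent. All names here are illustrative, not from the paper's code.
rng = np.random.default_rng(0)
m = 4096                        # hidden width
x, y = 1.0, 0.0                 # single training example and target

u = rng.normal(size=m)          # first-layer weights
v = rng.normal(size=m)          # second-layer weights

def f(u, v):
    return x * (u @ v) / np.sqrt(m)

def lam(u, v):
    # Tangent-kernel value for this example: x^2 (|u|^2 + |v|^2) / m
    return x**2 * (u @ u + v @ v) / m

eta = 3.0 / lam(u, v)           # between 2/lambda_0 and 4/lambda_0: catapult regime

for step in range(100):
    out = f(u, v)
    if step % 10 == 0:
        print(f"step {step:3d}  loss {0.5 * (out - y)**2:12.4f}  lambda {lam(u, v):.4f}")
    g = (out - y) * x / np.sqrt(m)            # shared gradient factor
    u, v = u - eta * g * v, v - eta * g * u   # simultaneous gradient-descent update
```

Setting `eta` below `2.0 / lam(u, v)` instead reproduces the monotone lazy-phase behavior of this model, while values above `4.0 / lam(u, v)` diverge.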

Implications for Neural Network Training

The findings challenge the traditional understanding of learning rate selection in deep learning. By demonstrating that optimal learning rates often lie within the catapult phase, this research provides practitioners with a rationale for experimenting with larger learning rates during training. Additionally, it sheds light on the observed phenomenon where large learning rates induce a form of implicit regularization that leads to better generalization by navigating the loss landscape towards flatter regions.
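One practical way to act on this, sketched below as an illustration rather than a procedure from the paper, is to estimate the largest curvature eigenvalue at initialization and place the learning rate just above the lazy-phase boundary 2/λ₀. The sketch uses a power iteration on Hessian-vector products in PyTorch, under the assumption that the top Hessian eigenvalue of the loss is a reasonable stand-in for the tangent-kernel eigenvalue near initialization:

```python
import torch

def top_curvature(loss_fn, params, n_iter=30):
    """Estimate the largest eigenvalue of the loss Hessian via power
    iteration on Hessian-vector products (double backprop).
    `params` are the model parameters (tensors with requires_grad=True)."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(n_iter):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        eig = sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]
    return eig

# Hypothetical usage (model, mse_loss, x_batch, y_batch are placeholders):
# lam0 = top_curvature(lambda: mse_loss(model(x_batch), y_batch),
#                      list(model.parameters()))
# eta = 2.5 / lam0   # just above the lazy-phase boundary 2 / lambda_0
```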

Theoretical Contributions

From a theoretical standpoint, the work bridges the gap between existing neural tangent kernel theory and the nonlinear training dynamics that arise at large learning rates. The authors address a limitation of existing infinite-width models, which exclude effects arising from the finite width of networks. Their analysis explains the stability of the large learning rate regime by incorporating the curvature-reduction dynamics intrinsic to the catapult effect.
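Concretely, for a two-layer linear warm-up model of the kind analyzed in the paper, with output f(x) = (u · v) x / √m, width m, and a single training example taken here with x = 1 and target y = 0, the gradient descent updates of the output f_t and tangent-kernel value λ_t reduce to:

```latex
\[
f_{t+1} = f_t \left(1 - \eta \lambda_t + \frac{\eta^2 f_t^2}{m}\right),
\qquad
\lambda_{t+1} = \lambda_t - \frac{\eta f_t^2}{m}\,\bigl(4 - \eta \lambda_t\bigr).
\]
```

The second equation shows that the curvature decreases whenever \(\eta\lambda_t < 4\) and \(f_t \neq 0\). In the catapult phase \(|1 - \eta\lambda_0| > 1\), so the output grows at first, which drives \(\lambda_t\) down until \(\eta\lambda_t < 2\); after that the loss decreases and training converges at a flatter point than where it started.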

Future Directions

The paper opens several avenues for future research, among them the implications of the catapult mechanism for more complex architectures and its interaction with other common optimizers such as Adam or RMSprop. Moreover, the consistency of the results across tasks motivates a more general framework that integrates these finite-width, large learning rate dynamics into existing deep learning theory.

The insights from this paper advance not only the understanding of neural network behavior but also practical training methodology, with the potential to improve the efficacy and efficiency of deep learning systems. Models trained with such large learning rate strategies may prove more robust and adaptable in real-world applications.
