A Loss Curvature Perspective on Training Instability in Deep Learning (2110.04369v1)

Published 8 Oct 2021 in cs.LG and cs.AI

Abstract: In this work, we study the evolution of the loss Hessian across many classification tasks in order to understand the effect the curvature of the loss has on the training dynamics. Whereas prior work has focused on how different learning rates affect the loss Hessian observed during training, we also analyze the effects of model initialization, architectural choices, and common training heuristics such as gradient clipping and learning rate warmup. Our results demonstrate that successful model and hyperparameter choices allow the early optimization trajectory to either avoid -- or navigate out of -- regions of high curvature and into flatter regions that tolerate a higher learning rate. Our results suggest a unifying perspective on how disparate mitigation strategies for training instability ultimately address the same underlying failure mode of neural network optimization, namely poor conditioning. Inspired by the conditioning perspective, we show that learning rate warmup can improve training stability just as much as batch normalization, layer normalization, MetaInit, GradInit, and Fixup initialization.
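
The paper's central quantity is the curvature of the loss, tracked through the largest eigenvalue of the loss Hessian. As a rough illustration of how this is commonly estimated without materializing the full Hessian, here is a minimal sketch, assuming PyTorch, of power iteration with Hessian-vector products; `model`, `loss_fn`, `inputs`, and `targets` are hypothetical placeholders, not the authors' code.

```python
import torch

def top_hessian_eigenvalue(model, loss_fn, inputs, targets, iters=20):
    """Estimate the largest eigenvalue of the loss Hessian by power iteration."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(inputs), targets)
    # Keep the graph so the gradient can be differentiated a second time.
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Start from a random unit vector shaped like the parameters.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((u * u).sum() for u in v))
    v = [u / norm for u in v]

    eigenvalue = 0.0
    for _ in range(iters):
        # Hessian-vector product: Hv = d(grad . v) / d(params).
        dot = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(dot, params, retain_graph=True)
        hv = [h.detach() for h in hv]  # drop autograd history before reuse
        # Rayleigh quotient v^T H v is the current eigenvalue estimate.
        eigenvalue = sum((h * u).sum() for h, u in zip(hv, v)).item()
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / norm for h in hv]
    return eigenvalue
```

Learning rate warmup, which the abstract reports can stabilize training as effectively as normalization layers or specialized initializations, is typically a linear ramp. A minimal sketch, with hypothetical `base_lr` and `warmup_steps` values:

```python
def warmup_lr(step, base_lr=0.1, warmup_steps=1000):
    # Scale the learning rate linearly from ~0 to base_lr over the first
    # warmup_steps updates, giving optimization time to leave high-curvature
    # regions before the full learning rate is applied.
    return base_lr * min(1.0, (step + 1) / warmup_steps)
```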

Authors (9)
  1. Justin Gilmer (39 papers)
  2. Behrooz Ghorbani (18 papers)
  3. Ankush Garg (14 papers)
  4. Sneha Kudugunta (14 papers)
  5. Behnam Neyshabur (53 papers)
  6. David Cardoze (2 papers)
  7. George Dahl (4 papers)
  8. Zachary Nado (23 papers)
  9. Orhan Firat (80 papers)
Citations (34)

