Large Learning Rates Improve Generalization: But How Large Are We Talking About? (2311.11303v1)

Published 19 Nov 2023 in cs.LG and stat.ML

Abstract: Inspired by recent research that recommends starting neural network training with large learning rates (LRs) to achieve the best generalization, we explore this hypothesis in detail. Our study clarifies the initial LR ranges that provide optimal results for subsequent training with a small LR or weight averaging. We find that these ranges are in fact significantly narrower than generally assumed. We conduct our main experiments in a simplified setup that allows precise control of the learning rate hyperparameter and validate our key findings in a more practical setting.
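The abstract describes a two-phase protocol: an initial phase trained at a large LR, followed either by continued training at a small LR or by averaging weights collected during training. Below is a minimal PyTorch sketch of that protocol under stated assumptions: the function name, LR values, and epoch counts are illustrative placeholders, not the ranges the paper identifies.

```python
# Minimal sketch of the two-phase protocol from the abstract:
# train with a large LR first, then either (a) fine-tune with a
# small LR or (b) average weight snapshots from the large-LR phase.
# All hyperparameter values below are hypothetical placeholders.
import copy
import torch

def two_phase_train(model, loader, loss_fn,
                    lr_large=1.0, lr_small=0.01,
                    epochs_large=50, epochs_small=50,
                    average_weights=False):
    opt = torch.optim.SGD(model.parameters(), lr=lr_large)
    snapshots = []  # weight snapshots for optional averaging

    # Phase 1: large-LR training
    for _ in range(epochs_large):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        snapshots.append(copy.deepcopy(model.state_dict()))

    if average_weights:
        # Option (b): average the snapshots from the large-LR phase
        avg = copy.deepcopy(snapshots[0])
        for key in avg:
            stacked = torch.stack([s[key].float() for s in snapshots])
            avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
        model.load_state_dict(avg)
        return model

    # Option (a): continue training with a small LR
    for group in opt.param_groups:
        group["lr"] = lr_small
    for _ in range(epochs_small):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

The paper's central point is that the generalization benefit depends sensitively on `lr_large`: only a narrow band of initial LRs yields the best results after the small-LR or averaging phase, so in practice this value would need to be tuned rather than simply set "large".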

Authors (4)
  1. Ekaterina Lobacheva (17 papers)
  2. Eduard Pockonechnyy (1 paper)
  3. Maxim Kodryan (6 papers)
  4. Dmitry Vetrov (84 papers)
