Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks (1908.06477v2)

Published 18 Aug 2019 in cs.LG and stat.ML

Abstract: Learning Rate (LR) is an important hyper-parameter to tune for effective training of deep neural networks (DNNs). Even for the baseline of a constant learning rate, it is non-trivial to choose a good constant value for training a DNN. Dynamic learning rates involve multi-step tuning of LR values at various stages of the training process and offer high accuracy and fast convergence. However, they are much harder to tune. In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating and selecting LR policies, including classification confidence, variance, cost, and robustness, and implement them in LRBench, an LR benchmarking system. LRBench can assist end-users and DNN developers in selecting good LR policies and avoiding bad ones for training their DNNs. We tested LRBench on Caffe, an open-source deep learning framework, to showcase the tuning optimization of LR policies. Through extensive experiments, we attempt to demystify the tuning of LR policies by identifying good LR policies with effective LR value ranges and step sizes for LR update schedules.
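
As a concrete illustration of what an "LR policy" means here, below is a minimal Python sketch of three common LR-function families of the kind the paper's study covers: a constant policy, step decay, and a triangular cyclic policy. The function and parameter names (fixed_lr, step_lr, half_cycle, etc.) are illustrative assumptions, not LRBench's actual API.

```python
import math

def fixed_lr(base_lr: float, t: int) -> float:
    """Constant policy: the LR never changes during training."""
    return base_lr

def step_lr(base_lr: float, t: int,
            gamma: float = 0.1, step_size: int = 1000) -> float:
    """Step decay: multiply the LR by gamma every step_size iterations."""
    return base_lr * (gamma ** (t // step_size))

def triangular_lr(base_lr: float, max_lr: float, t: int,
                  half_cycle: int = 2000) -> float:
    """Cyclic (triangular) policy: the LR oscillates linearly between
    base_lr and max_lr with a period of 2 * half_cycle iterations."""
    cycle = math.floor(1 + t / (2 * half_cycle))
    x = abs(t / half_cycle - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

if __name__ == "__main__":
    # Print the LR each policy would produce at a few training iterations.
    for t in (0, 1000, 2000, 3000, 4000):
        print(t, fixed_lr(0.1, t), step_lr(0.1, t),
              round(triangular_lr(0.001, 0.006, t), 5))
```

A policy in this sense is one such function together with concrete choices for its range parameters (e.g. base_lr, max_lr), step parameters (e.g. step_size, half_cycle), and value update parameters; the paper's metrics (confidence, variance, cost, robustness) are then computed over training runs under each candidate policy.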

Authors (9)
  1. Yanzhao Wu (38 papers)
  2. Ling Liu (132 papers)
  3. Juhyun Bae (4 papers)
  4. Ka-Ho Chow (31 papers)
  5. Arun Iyengar (14 papers)
  6. Calton Pu (21 papers)
  7. Wenqi Wei (55 papers)
  8. Lei Yu (234 papers)
  9. Qi Zhang (784 papers)
Citations (60)