Better Parameter-free Stochastic Optimization with ODE Updates for Coin-Betting (2006.07507v3)

Published 12 Jun 2020 in cs.LG and stat.ML

Abstract: Parameter-free stochastic gradient descent (PFSGD) algorithms do not require setting learning rates while achieving optimal theoretical performance. In practical applications, however, there remains an empirical gap between tuned stochastic gradient descent (SGD) and PFSGD. In this paper, we close the empirical gap with a new parameter-free algorithm based on continuous-time Coin-Betting on truncated models. The new update is derived by solving an Ordinary Differential Equation (ODE) and has a closed form. We show empirically that this new parameter-free algorithm outperforms algorithms with the "best default" learning rates and almost matches the performance of finely tuned baselines, with nothing to tune.
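
The abstract does not spell out the update rule, but the coin-betting reduction this line of work builds on can be sketched. Below is a minimal sketch of the standard Krichevsky-Trofimov (KT) coin-betting optimizer that parameter-free methods of this family start from, not the paper's ODE-derived update on truncated models. The function name `kt_coin_betting`, the `grad_fn` interface, and the unit gradient-norm assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

def kt_coin_betting(grad_fn, x0, n_steps, eps=1.0):
    """Parameter-free optimization via Krichevsky-Trofimov (KT) coin betting.

    grad_fn(x) must return a stochastic (sub)gradient with norm at most 1.
    eps is the initial "wealth"; there is no learning rate to tune.
    """
    x0 = np.asarray(x0, dtype=float)
    wealth = eps                      # money available to bet
    coin_sum = np.zeros_like(x0)      # sum of past "coins" c_i = -g_i
    avg = x0.copy()                   # running average of the iterates
    for t in range(1, n_steps + 1):
        beta = coin_sum / t           # KT betting fraction, ||beta|| < 1
        x = x0 + beta * wealth        # bet a fraction of the current wealth
        g = grad_fn(x)
        c = -g                        # coin outcome for this round
        wealth += np.dot(c, x - x0)   # payoff of the bet just placed
        coin_sum += c
        avg += (x - avg) / t          # average iterate for convex problems
    return avg

# Example: minimize f(x) = |x - 3| in 1-D (subgradients lie in {-1, 0, +1}).
sol = kt_coin_betting(lambda x: np.sign(x - 3.0), x0=np.zeros(1), n_steps=5000)
print(sol)  # approaches [3.0]
```

Because the bet is a fraction of accumulated wealth rather than a fixed step size, the effective step scale adapts automatically; the paper's contribution is to replace the discrete betting step with the closed-form solution of a continuous-time ODE on truncated models, which narrows the empirical gap to tuned SGD.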

Authors (3)
  1. Keyi Chen (7 papers)
  2. John Langford (94 papers)
  3. Francesco Orabona (62 papers)
Citations (19)
