
Time-Smoothed Gradients for Online Forecasting (1905.08850v1)

Published 21 May 2019 in cs.LG and stat.ML

Abstract: Here, we study different update rules in stochastic gradient descent (SGD) for online forecasting problems. The selection of the learning rate parameter is critical in SGD. However, it may not be feasible to tune this parameter in online learning. Therefore, it is necessary to have an update rule that is not sensitive to the selection of the learning rate. Inspired by the local regret metric that we introduced previously, we propose to use time-smoothed gradients within the SGD update. Using the public GEFCom2014 data set, we validate that our approach yields more stable results than other existing approaches. Furthermore, we show that such a simple approach is computationally efficient compared to the alternatives.
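
The abstract names the technique but does not spell out the update rule. The sketch below is one plausible reading, in which each SGD step descends a weighted average of the gradients of the most recent losses rather than only the latest one. The window size `window`, the geometric decay factor `alpha`, and the AR(p) toy forecasting model are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def time_smoothed_grad(w, grad_fns, window=5, alpha=0.9):
    """Weighted average of the gradients of the most recent losses.

    grad_fns is ordered oldest -> newest; the newest gradient gets the
    largest weight (alpha**0 = 1) and older ones decay geometrically.
    """
    recent = grad_fns[-window:]
    weights = np.array([alpha ** (len(recent) - 1 - i) for i in range(len(recent))])
    weights /= weights.sum()
    return sum(wt * g(w) for wt, g in zip(weights, recent))

def online_forecast(series, lr=0.05, window=5, alpha=0.9, p=3):
    """Online AR(p) forecasting with time-smoothed SGD on squared error."""
    w = np.zeros(p)
    grad_fns = []
    preds = []
    for t in range(p, len(series)):
        x, y = series[t - p:t], series[t]
        preds.append(w @ x)  # predict before seeing the target
        # Gradient of the squared loss (w @ x - y)^2 at this time step;
        # default arguments pin x and y to avoid late binding.
        grad_fns.append(lambda w, x=x, y=y: 2.0 * (w @ x - y) * x)
        grad_fns = grad_fns[-window:]  # keep only the active window
        w = w - lr * time_smoothed_grad(w, grad_fns, window, alpha)
    return w, np.array(preds)

rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(500)
w, preds = online_forecast(series)
print("final weights:", w)
print("last-100 MSE:", np.mean((preds[-100:] - series[-100:]) ** 2))
```

Because the smoothed gradient changes slowly from one step to the next, a mildly mistuned learning rate perturbs consecutive updates less than in plain SGD, which is consistent with the stability the abstract claims.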
