A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets (1202.6258v4)

Published 28 Feb 2012 in math.OC and cs.LG

Abstract: We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.
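
As a rough illustration of the gradient-memory idea described in the abstract, a minimal sketch might keep a table holding the most recent gradient seen for each of the n component functions and step along the average of the stored gradients; a running sum avoids recomputing that average over all n entries at every step. The least-squares objective, step size, and problem sizes below are illustrative assumptions, not values taken from the paper.

import numpy as np

def sag(grad_i, x0, n, alpha, iters, seed=0):
    # Minimal sketch of a stochastic gradient method with a memory of
    # previous gradient values, as described in the abstract.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    y = np.zeros((n, x0.size))   # memory: last gradient seen for each f_i
    d = np.zeros_like(x0)        # running sum of the stored gradients
    for _ in range(iters):
        i = rng.integers(n)      # sample one component uniformly at random
        g = grad_i(x, i)
        d += g - y[i]            # swap f_i's old gradient out of the sum
        y[i] = g
        x -= (alpha / n) * d     # step along the average stored gradient
    return x

# Hypothetical example: least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2,
# whose sum is strongly convex when A has full column rank.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
grad_i = lambda x, i: A[i] * (A[i] @ x - b[i])
L = (A ** 2).sum(axis=1).max()   # Lipschitz constant of the component gradients
x_hat = sag(grad_i, np.zeros(5), n=100, alpha=1.0 / (16.0 * L), iters=20000)

The step size here is a conservative choice relative to the component Lipschitz constant; the exact constants used in the paper's convergence analysis differ.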

Authors (3)
  1. Nicolas Le Roux (41 papers)
  2. Mark Schmidt (74 papers)
  3. Francis Bach (249 papers)
Citations (104)
