An Adaptive Incremental Gradient Method With Support for Non-Euclidean Norms (2205.02273v2)

Published 28 Apr 2022 in math.OC and cs.LG

Abstract: Stochastic variance-reduced methods have shown strong performance on finite-sum problems. However, these methods usually require users to manually tune the step-size, which is time-consuming or even infeasible for some large-scale optimization tasks. To overcome this problem, we propose and analyze several novel adaptive variants of the popular SAGA algorithm. In particular, we design a variant of the Barzilai-Borwein step-size tailored to the incremental gradient method to ensure memory efficiency and fast convergence. We establish convergence guarantees under general settings that allow non-Euclidean norms in the definition of smoothness and composite objectives, which cover a broad range of applications in machine learning. We also improve the analysis of SAGA to support non-Euclidean norms, filling a gap in existing work. Numerical experiments on standard datasets demonstrate that the proposed algorithm performs competitively with existing variance-reduced methods and their adaptive variants.
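
To make the abstract's ingredients concrete, here is a minimal NumPy sketch that combines plain SAGA updates with a Barzilai-Borwein (BB) step-size recomputed once per epoch, in the style of SVRG-BB. This is an illustration of the general idea, not the paper's exact adaptive SAGA variant: the least-squares objective, the epoch-level BB1 rule with 1/n scaling, and the names (`saga_bb`, `eta0`) are all assumptions made for the example.

```python
import numpy as np

def saga_bb(A, b, n_epochs=30, eta0=1e-2, seed=0):
    """SAGA with an epoch-wise Barzilai-Borwein step-size (illustrative sketch).

    Objective: (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2, a simple smooth finite sum.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Gradient table: grad of f_i(x) = 0.5*(a_i^T x - b_i)^2 is a_i*(a_i^T x - b_i).
    table = A * (A @ x - b)[:, None]          # shape (n, d)
    g_avg = table.mean(axis=0)                # running average of stored gradients
    eta = eta0
    x_prev = x.copy()
    full_prev = A.T @ (A @ x - b) / n         # full gradient at the previous epoch
    for _ in range(n_epochs):
        for _ in range(n):
            i = rng.integers(n)
            g_i = A[i] * (A[i] @ x - b[i])
            # SAGA direction: fresh gradient minus stored one, plus the table average.
            x = x - eta * (g_i - table[i] + g_avg)
            g_avg += (g_i - table[i]) / n     # update average before overwriting entry
            table[i] = g_i
        # BB1 step-size from successive epoch iterates and full gradients
        # (assumption: the paper's rule may differ; this mirrors the SVRG-BB heuristic).
        full = A.T @ (A @ x - b) / n
        s, y = x - x_prev, full - full_prev
        denom = s @ y
        if denom > 1e-12:
            eta = (s @ s) / (n * denom)       # 1/n scaling as in SVRG-BB
        x_prev, full_prev = x.copy(), full
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_star = rng.standard_normal(10)
    b = A @ x_star                            # consistent system, so x_star is optimal
    x_hat = saga_bb(A, b)
    print("distance to optimum:", np.linalg.norm(x_hat - x_star))
```

The epoch-level BB update is what removes manual step-size tuning: eta adapts to the local curvature estimate s^T y / s^T s, while the per-sample gradient table keeps the memory cost at one stored gradient per data point, as in standard SAGA.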

Authors (5)
  1. Binghui Xie (8 papers)
  2. Chenhan Jin (3 papers)
  3. Kaiwen Zhou (42 papers)
  4. James Cheng (75 papers)
  5. Wei Meng (20 papers)
Citations (1)
