
ASAGA: Asynchronous Parallel SAGA (1606.04809v3)

Published 15 Jun 2016 in math.OC, cs.LG, and stat.ML

Abstract: We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced "perturbed iterate" framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
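For context, the building block that ASAGA parallelizes is SAGA's variance-reduced incremental gradient update. Below is a minimal sequential sketch in Python; the function names and interface are illustrative assumptions, not from the paper. In ASAGA proper, each core would run this inner loop concurrently on a shared iterate with lock-free, coordinate-wise atomic writes, reading possibly inconsistent ("perturbed") iterates.

```python
import numpy as np

def saga(grad_i, x0, n, step, n_iter, seed=0):
    """Sequential SAGA loop (the update that ASAGA runs
    asynchronously and lock-free on each core).

    grad_i(x, i) -- gradient of the i-th component function f_i at x
    x0           -- initial iterate (1-D float array)
    n            -- number of component functions
    step         -- constant step size gamma
    """
    rng = rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    # Memory of historical gradients alpha_i and their running average.
    alpha = np.array([grad_i(x, i) for i in range(n)])
    alpha_bar = alpha.mean(axis=0)
    for _ in range(n_iter):
        i = rng.integers(n)
        g = grad_i(x, i)
        # Unbiased, variance-reduced gradient estimate.
        x -= step * (g - alpha[i] + alpha_bar)
        # Update the memory and its average in O(d).
        alpha_bar += (g - alpha[i]) / n
        alpha[i] = g
    return x
```

In the asynchronous setting analyzed in the paper, several workers execute this loop on a shared x with no synchronization; the reads of x may mix old and new coordinates, and the simplified "perturbed iterate" analysis of exactly those inconsistent reads is what yields the linear speedup guarantee without sparsity assumptions.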

Authors (3)
  1. Rémi Leblond (10 papers)
  2. Fabian Pedregosa (48 papers)
  3. Simon Lacoste-Julien (95 papers)
Citations (99)
