ASAGA: Asynchronous Parallel SAGA (1606.04809v3)
Published 15 Jun 2016 in math.OC, cs.LG, and stat.ML
Abstract: We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced "perturbed iterate" framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
- Rémi Leblond (10 papers)
- Fabian Pedregosa (48 papers)
- Simon Lacoste-Julien (95 papers)
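To make the algorithm the abstract refers to concrete, below is a minimal sketch of the sequential SAGA update that ASAGA parallelizes: keep a memory of the last gradient computed for each data point, and at every step correct the fresh stochastic gradient with that memory to get a variance-reduced, unbiased update. Function and variable names (`saga`, `grad_fi`, `memory`) are illustrative, not from the paper; ASAGA itself runs this same update lock-free across cores, with each core reading a possibly inconsistent copy of the iterate.

```python
import numpy as np

def saga(grad_fi, n, x0, step_size, n_steps=10_000, rng=None):
    """Sketch of sequential SAGA for minimizing (1/n) * sum_i f_i(x).

    grad_fi(i, x) returns the gradient of the i-th component f_i at x.
    This is an illustrative sketch, not the authors' implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float).copy()
    # Memory table: last gradient seen for each component i.
    memory = np.array([grad_fi(i, x) for i in range(n)])
    mean_grad = memory.mean(axis=0)

    for _ in range(n_steps):
        i = rng.integers(n)
        g = grad_fi(i, x)
        # Variance-reduced, unbiased gradient estimate.
        x -= step_size * (g - memory[i] + mean_grad)
        # Maintain the running mean before overwriting the memory slot.
        mean_grad += (g - memory[i]) / n
        memory[i] = g
    return x

# Example usage on a toy least-squares problem, f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ np.ones(5)
x_hat = saga(lambda i, x: (A[i] @ x - b[i]) * A[i],
             n=100, x0=np.zeros(5), step_size=0.05)
```

In the asynchronous setting analyzed in the paper, several workers execute the loop body concurrently on shared memory without locks, which is what the "perturbed iterate" analysis has to account for.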