Distributed Stochastic Non-Convex Optimization: Momentum-Based Variance Reduction (2005.00224v1)

Published 1 May 2020 in math.OC and cs.DC

Abstract: In this work, we propose a distributed algorithm for stochastic non-convex optimization. We consider a worker-server architecture where a set of $K$ worker nodes (WNs), in collaboration with a server node (SN), jointly aim to minimize a global, potentially non-convex objective function. The objective function is assumed to be the sum of local objective functions available at each WN, with each node having access to only the stochastic samples of its local objective function. In contrast to the existing approaches, we employ a momentum-based "single loop" distributed algorithm which eliminates the need for computing large batch-size gradients to achieve variance reduction. We propose two algorithms, one with "adaptive" and the other with "non-adaptive" learning rates. We show that the proposed algorithms achieve the optimal computational complexity while attaining linear speedup with the number of WNs. Specifically, the algorithms reach an $\epsilon$-stationary point $x_a$ with $\mathbb{E}\| \nabla f(x_a) \| \leq \tilde{O}(K^{-1/3}T^{-1/2} + K^{-1/3}T^{-1/3})$ in $T$ iterations, thereby requiring $\tilde{O}(K^{-1} \epsilon^{-3})$ gradient computations at each WN. Moreover, our approach does not assume identical data distributions across WNs, making the approach general enough for federated learning applications.

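The abstract describes a single-loop, momentum-based variance-reduction scheme in a worker-server setup. Below is a minimal single-machine simulation of that idea, assuming a STORM-style momentum gradient estimator that the server averages; the toy quadratic objectives, the step-size and momentum schedules, and all names are illustrative assumptions, not the paper's exact algorithm or constants.

```python
# Sketch of a STORM-style momentum-based variance-reduced update
# in a worker-server setting, simulated on one machine.
# Assumptions: toy quadratic local objectives, illustrative schedules.
import numpy as np

rng = np.random.default_rng(0)
K, d, n, T = 4, 10, 50, 500        # workers, dimension, samples per worker, iterations

# Worker k's local objective: f_k(x) = (1/n) * sum_i 0.5 * (A_k[i] @ x - b_k[i])^2.
# A_k, b_k differ across workers, so data distributions are non-identical.
A = [rng.normal(size=(n, d)) for _ in range(K)]
b = [rng.normal(size=n) for _ in range(K)]

def grad_sample(k, i, x):
    """Unbiased stochastic gradient of f_k at x from a single sample i."""
    return A[k][i] * (A[k][i] @ x - b[k][i])

x_prev = x = np.zeros(d)
d_est = np.zeros((K, d))           # per-worker momentum gradient estimators

for t in range(T):
    eta = 0.05 / (t + 1) ** (1 / 3)           # illustrative (non-adaptive) step size
    a = min(1.0, 1.0 / (t + 1) ** (2 / 3))    # illustrative momentum weight

    for k in range(K):                         # in practice each WN does this locally
        i = rng.integers(n)                    # one fresh sample; no large batches needed
        # Evaluate the SAME sample at the current and previous iterates, then
        # apply the STORM-style variance-reduced momentum recursion.
        d_est[k] = grad_sample(k, i, x) + (1 - a) * (d_est[k] - grad_sample(k, i, x_prev))

    # The SN averages the K local estimators and takes the descent step.
    x_prev, x = x, x - eta * d_est.mean(axis=0)

full_grad = np.mean([A[k].T @ (A[k] @ x - b[k]) / n for k in range(K)], axis=0)
print("gradient norm at final iterate:", np.linalg.norm(full_grad))
```

The point the sketch illustrates is the "single loop" property: each worker reuses one fresh sample at both the current and previous iterates, so variance reduction is obtained without any large-batch gradient computations, and the server only ever averages the $K$ local estimators before stepping.
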
Authors (6)
  1. Prashant Khanduri (29 papers)
  2. Pranay Sharma (26 papers)
  3. Swatantra Kafle (5 papers)
  4. Saikiran Bulusu (7 papers)
  5. Ketan Rajawat (52 papers)
  6. Pramod K. Varshney (135 papers)
Citations (5)
