
Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization (2106.14479v2)

Published 28 Jun 2021 in math.OC and cs.DC

Abstract: This paper proposes a distributed stochastic algorithm with variance reduction for general smooth non-convex finite-sum optimization, which has wide applications in the signal processing and machine learning communities. In the distributed setting, a large number of samples are allocated to multiple agents in the network. Each agent computes a local stochastic gradient and communicates with its neighbors to seek the global optimum. In this paper, we develop a modified variance reduction technique to deal with the variance introduced by stochastic gradients. Combining gradient tracking and variance reduction techniques, this paper proposes a distributed stochastic algorithm, GT-VR, to solve large-scale non-convex finite-sum optimization over multi-agent networks. A complete and rigorous proof shows that the GT-VR algorithm converges to first-order stationary points with an $O(\frac{1}{k})$ convergence rate. In addition, we provide a complexity analysis of the proposed algorithm. Compared with some existing first-order methods, the proposed algorithm has a lower $\mathcal{O}(PM\epsilon^{-1})$ gradient complexity under mild conditions. By comparing state-of-the-art algorithms with GT-VR in experimental simulations, we verify the efficiency of the proposed algorithm.
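
The abstract names the two building blocks, gradient tracking and a modified variance reduction scheme, but not the exact update rules. The following is a minimal sketch of how such a combination typically looks, assuming an SVRG-style variance-reduced estimator, a constant step size, a periodic snapshot schedule, and a doubly stochastic mixing matrix over a ring of agents; every name, schedule, and parameter here is an illustrative assumption, not the paper's GT-VR specification.

```python
# Hypothetical sketch of a gradient-tracking + variance-reduction update
# (GT-VR-style). The update rules, step size, and snapshot schedule are
# illustrative assumptions, not the paper's specification.
import numpy as np

rng = np.random.default_rng(0)

P, M, d = 4, 50, 5          # agents, samples per agent, dimension
alpha = 0.05                # constant step size (assumed)
snapshot_period = M         # SVRG-style snapshot refresh interval (assumed)

# Toy local finite sums: f_i(x) = (1/M) * sum_s 0.5 * (A[i,s] @ x - b[i,s])^2
A = rng.normal(size=(P, M, d))
b = rng.normal(size=(P, M))

def sample_grad(i, s, x):
    """Gradient of the s-th component of agent i's local objective."""
    return A[i, s] * (A[i, s] @ x - b[i, s])

def full_grad(i, x):
    """Full local gradient (1/M) * sum_s grad f_{i,s}(x)."""
    return (A[i].T @ (A[i] @ x - b[i])) / M

# Doubly stochastic mixing matrix for a ring of P agents.
W = np.zeros((P, P))
for i in range(P):
    W[i, i] = 0.5
    W[i, (i - 1) % P] = 0.25
    W[i, (i + 1) % P] = 0.25

x = np.zeros((P, d))                       # local iterates x_i
snap = x.copy()                            # SVRG snapshot points
mu = np.stack([full_grad(i, snap[i]) for i in range(P)])  # snapshot gradients
g_prev = mu.copy()                         # previous gradient estimates
y = g_prev.copy()                          # gradient trackers, y_i^0 = g_i^0

for k in range(1, 501):
    # Consensus step plus descent along the tracked global direction.
    x = W @ x - alpha * y
    if k % snapshot_period == 0:           # periodic snapshot refresh
        snap = x.copy()
        mu = np.stack([full_grad(i, snap[i]) for i in range(P)])
    # SVRG-style variance-reduced stochastic gradient per agent.
    g = np.empty_like(g_prev)
    for i in range(P):
        s = rng.integers(M)
        g[i] = sample_grad(i, s, x[i]) - sample_grad(i, s, snap[i]) + mu[i]
    # Gradient tracking: mix trackers, then add the local gradient increment.
    y = W @ y + g - g_prev
    g_prev = g

# Stationarity check at the consensus average of the local iterates.
x_bar = x.mean(axis=0)
grad_bar = np.mean([full_grad(i, x_bar) for i in range(P)], axis=0)
print("||average gradient at consensus mean|| =", np.linalg.norm(grad_bar))
```

The tracker update `y = W @ y + g - g_prev` preserves the invariant that the average of the trackers equals the average of the current gradient estimates, which is what lets each agent descend along an estimate of the global gradient; the SVRG-style correction term shrinks the stochastic variance as iterates approach the snapshots.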

Authors (4)
  1. Xia Jiang (18 papers)
  2. Xianlin Zeng (25 papers)
  3. Jian Sun (415 papers)
  4. Jie Chen (602 papers)
Citations (10)
