GTAdam: Gradient Tracking with Adaptive Momentum for Distributed Online Optimization (2009.01745v3)

Published 3 Sep 2020 in math.OC, cs.DC, and cs.LG

Abstract: This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, i.e., by means of local computation and communication, without any central coordinator. We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first- and second-order momentum estimates of the gradient. The algorithm is analyzed in the online setting for strongly convex cost functions with Lipschitz continuous gradients. We provide an upper bound for the dynamic regret given by a term related to the initial conditions and another term related to the temporal variations of the objective functions. Moreover, a linear convergence rate is guaranteed in the static setup. The algorithm is tested on a time-varying classification problem, on a (moving) target localization problem, and in a stochastic optimization setup from image classification. In these numerical experiments from multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
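The abstract describes combining gradient tracking (a consensus-based estimate of the global gradient) with Adam-style first- and second-moment scaling. Below is a minimal sketch of that idea on a static problem; the exact update ordering, step sizes, and the absence of bias correction are assumptions for illustration, not the paper's precise scheme. Each agent averages its neighbors' iterates and trackers through a doubly stochastic mixing matrix `W`, then descends along its tracker scaled by the momentum estimates.

```python
import numpy as np

def gtadam_sketch(grads, W, x0, alpha=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, iters=3000):
    """Illustrative GTAdam-style iteration (static setup).

    grads : list of per-agent gradient functions grad_i(x)
    W     : doubly stochastic mixing matrix, shape (N, N)
    x0    : initial iterates, shape (N, d)
    """
    N, d = x0.shape
    x = x0.copy()
    g = np.array([grads[i](x[i]) for i in range(N)])  # local gradients
    s = g.copy()                                      # gradient trackers
    m = np.zeros((N, d))                              # first-moment estimates
    v = np.zeros((N, d))                              # second-moment estimates
    for _ in range(iters):
        m = beta1 * m + (1 - beta1) * s
        v = beta2 * v + (1 - beta2) * s**2
        # consensus on x plus an Adam-scaled descent along the tracker
        x = W @ x - alpha * m / (np.sqrt(v) + eps)
        g_new = np.array([grads[i](x[i]) for i in range(N)])
        # gradient tracking: consensus on s plus the local gradient innovation
        s = W @ s + g_new - g
        g = g_new
    return x

# Toy problem: f_i(x) = 0.5 * ||x - c_i||^2, so the sum is minimized at mean(c_i)
rng = np.random.default_rng(0)
c = rng.normal(size=(4, 2))
grads = [lambda x, ci=ci: x - ci for ci in c]
W = np.full((4, 4), 0.25)  # complete graph with uniform weights
x = gtadam_sketch(grads, W, np.zeros((4, 2)))
```

In this sketch the trackers `s` keep each agent's search direction aligned with the average of the local gradients, which is what lets the agents agree on the minimizer of the sum rather than of their own local costs.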

Authors (4)
  1. Guido Carnevale (18 papers)
  2. Francesco Farina (18 papers)
  3. Ivano Notarnicola (28 papers)
  4. Giuseppe Notarstefano (80 papers)
Citations (16)
