
Distributed Stochastic Optimization With Unbounded Subgradients Over Randomly Time-Varying Networks

Published 20 Aug 2020 in eess.SY and cs.SY (arXiv:2008.08796v14)

Abstract: Motivated by distributed statistical learning over uncertain communication networks, we study distributed stochastic optimization in which networked nodes cooperatively minimize a sum of convex cost functions. The network is modeled by a sequence of time-varying random digraphs, with each node representing a local optimizer and each edge representing a communication link. We consider the distributed subgradient optimization algorithm with noisy measurements of the local cost functions' subgradients, as well as additive and multiplicative noise in the information exchanged between each pair of nodes. Using the stochastic Lyapunov method, convex analysis, algebraic graph theory, and martingale convergence theory, we prove that if the local subgradient functions grow linearly and the sequence of digraphs is conditionally balanced and uniformly conditionally jointly connected, then the algorithm step sizes can be designed so that all nodes' states converge to the global optimal solution almost surely.

Citations (2)
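
The abstract describes a consensus-plus-subgradient update in which each node mixes noisy copies of its neighbors' states over a random digraph and then takes a step along a noisy local subgradient with a diminishing step size. The sketch below is an illustrative toy implementation under simplifying assumptions (i.i.d. Gaussian additive noise only, a randomly generated row-stochastic mixing matrix standing in for the random digraph, exact local gradients); the function and variable names are ours, not the paper's notation.

```python
import numpy as np

def noisy_consensus_subgradient_step(x, W, subgrads, alpha, noise_std=0.01, rng=None):
    """One iteration: each node averages its neighbors' noisy states using the
    row-stochastic weight matrix W, then takes a noisy subgradient step."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    # Additive communication noise on the exchanged states (assumption: i.i.d.
    # Gaussian; the paper allows more general additive/multiplicative noise).
    received = x + noise_std * rng.standard_normal((n, d))
    mixed = W @ received
    # Noisy measurements of the local subgradients.
    g = subgrads + noise_std * rng.standard_normal((n, d))
    return mixed - alpha * g

if __name__ == "__main__":
    # Toy problem: n nodes minimize sum_i 0.5*||x - c_i||^2, whose global
    # optimum is the mean of the local targets c_i.
    rng = np.random.default_rng(0)
    n, d = 5, 2
    centers = rng.standard_normal((n, d))   # local targets c_i
    x = rng.standard_normal((n, d))         # initial node states
    for k in range(1, 5001):
        # Random row-stochastic mixing matrix (stands in for one realization
        # of the time-varying random digraph, with self-loops kept).
        A = rng.random((n, n)) * (rng.random((n, n)) < 0.5)
        np.fill_diagonal(A, 1.0)
        W = A / A.sum(axis=1, keepdims=True)
        alpha = 1.0 / k                      # diminishing step size
        subgrads = x - centers               # exact local gradients in this toy
        x = noisy_consensus_subgradient_step(x, W, subgrads, alpha, rng=rng)
    print("node states (should approach the mean of the centers):")
    print(x.round(3))
    print(centers.mean(axis=0).round(3))
```

In this toy run, all node states drift toward the average of the local targets, mirroring the paper's almost-sure convergence claim, though the paper's analysis covers far weaker connectivity (uniform conditional joint connectedness) and noise conditions than this sketch simulates.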
