
Numerical Methods for Distributed Stochastic Compositional Optimization Problems with Aggregative Structure (2211.04532v1)

Published 4 Nov 2022 in math.OC

Abstract: The paper studies distributed stochastic compositional optimization problems over networks, where the inner-level function shared by all agents is the sum of each agent's private expectation function. Exploiting this aggregative structure of the inner-level function, we employ a hybrid variance reduction method to estimate each agent's private expectation function and apply a dynamic consensus mechanism to track the aggregated inner-level function. Combining these components with the standard distributed stochastic gradient descent method, we propose a distributed aggregative stochastic compositional gradient descent method. When the objective function is smooth, the proposed method achieves the optimal convergence rate $\mathcal{O}\left(K^{-1/2}\right)$. We further combine the proposed method with communication compression and propose a communication-compressed variant of the distributed aggregative stochastic compositional gradient descent method. The compressed variant maintains the optimal convergence rate $\mathcal{O}\left(K^{-1/2}\right)$. Simulated experiments on decentralized reinforcement learning verify the effectiveness of the proposed methods.
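
The abstract combines three ingredients: hybrid variance reduction for each agent's private inner function, dynamic consensus tracking of the aggregated inner-level value, and a distributed stochastic compositional gradient step over a mixing matrix. The following is a minimal Python sketch of how one iteration of such a scheme might look. It is an illustration under assumptions, not the paper's algorithm: the helper names `inner_sample`, `grad_inner_sample`, and `grad_outer_sample`, the doubly stochastic mixing matrix `W`, and the parameters `alpha` and `beta` are hypothetical.

```python
import numpy as np

def dascgd_step(x, x_prev, y, u, W, inner_sample, grad_inner_sample,
                grad_outer_sample, alpha, beta, rng):
    """One illustrative iteration for n agents (hypothetical interfaces).

    x, x_prev : (n, d) current and previous decision variables of the agents
    y         : (n, p) consensus trackers of the aggregated inner-level value
    u         : (n, p) variance-reduced estimates of each agent's inner function
    W         : (n, n) doubly stochastic mixing matrix of the network
    """
    n, _ = x.shape

    # 1) Hybrid variance reduction for each agent's private inner function g_i:
    #    mix a plain stochastic sample with a recursive correction term.
    u_new = np.stack([
        (1 - beta) * (u[i]
                      + inner_sample(i, x[i], rng)
                      - inner_sample(i, x_prev[i], rng))
        + beta * inner_sample(i, x[i], rng)
        for i in range(n)
    ])

    # 2) Dynamic consensus: track the network-wide aggregate of the inner
    #    functions using only neighbor averaging, y^{k+1} = W y^k + u^{k+1} - u^k.
    y_new = W @ y + (u_new - u)

    # 3) Compositional stochastic gradient step with local averaging:
    #    each agent uses a sampled inner Jacobian and an outer gradient
    #    evaluated at its tracked aggregate.
    grads = np.stack([
        grad_inner_sample(i, x[i], rng).T @ grad_outer_sample(i, y_new[i], rng)
        for i in range(n)
    ])
    x_new = W @ x - alpha * grads

    return x_new, y_new, u_new
```

In a decentralized implementation, the products `W @ y` and `W @ x` correspond to each agent averaging only with its immediate neighbors, so no agent needs global information about the others' private expectation functions.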

Citations (1)
