Privacy-Preserving Push-Pull Method for Decentralized Optimization via State Decomposition (2308.08164v1)

Published 16 Aug 2023 in eess.SY and cs.SY

Abstract: Distributed optimization shows great potential in many fields, e.g., machine learning, control, and resource allocation. Existing decentralized optimization algorithms require agents to share explicit state information, which raises the risk of private information leakage. A common way to ensure privacy is to combine information security mechanisms, such as differential privacy or homomorphic encryption, with traditional decentralized optimization algorithms; however, this either sacrifices optimization accuracy or incurs a heavy computational burden. To overcome these shortcomings, we develop a novel privacy-preserving decentralized optimization algorithm, called PPSD, that combines gradient tracking with a state decomposition mechanism. Specifically, each agent decomposes the state associated with its gradient into two substates. One substate is used for interaction with neighboring agents, while the other, which contains the private information, acts only on the first substate and thus remains entirely invisible to other agents. For strongly convex and smooth objective functions, PPSD attains an $R$-linear convergence rate. Moreover, the algorithm preserves agents' private information from being leaked to honest-but-curious neighbors. Simulations further confirm these results.
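
The sketch below illustrates the state-decomposition idea described in the abstract, under illustrative assumptions: each agent keeps a public substate (exchanged with neighbors) and a private substate (never shared, acting only on the public substate), alongside a standard gradient-tracking variable. The update rules, mixing weights, and step sizes are hand-picked for a toy quadratic problem and are not the exact PPSD recursion from the paper.

```python
import numpy as np

# Hypothetical sketch of state decomposition with gradient tracking.
# x_pub[i]  : public substate of agent i, exchanged with neighbors.
# x_priv[i] : private substate of agent i, never shared; it only acts on x_pub[i].
# y[i]      : gradient-tracking variable estimating the average gradient.
# The recursion and parameters are illustrative assumptions, not the paper's PPSD.

rng = np.random.default_rng(0)
n = 4                                    # agents on a ring graph
targets = rng.normal(size=n)             # local objectives f_i(x) = 0.5*(x - targets[i])**2

# Doubly stochastic mixing matrix for the ring (applied to public substates only).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x_pub = rng.normal(size=n)               # shared with neighbors
x_priv = rng.normal(size=n)              # private; invisible to other agents
grad = x_pub - targets                   # local gradients at the public substate
y = grad.copy()                          # initialize gradient tracking

step, couple = 0.05, 0.2                 # illustrative step size and coupling gain
for _ in range(3000):
    x_pub_next = W @ x_pub + couple * (x_priv - x_pub) - step * y
    x_priv = x_priv + couple * (x_pub - x_priv)       # private substate update
    grad_next = x_pub_next - targets
    y = W @ y + grad_next - grad                      # track the average gradient
    x_pub, grad = x_pub_next, grad_next

print("public substates :", np.round(x_pub, 4))
print("global minimizer :", round(targets.mean(), 4))
```

In this toy setup only x_pub is ever communicated, so a curious neighbor observes a quantity that is continually perturbed by the unseen x_priv, which mirrors the privacy argument sketched in the abstract.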

Authors (4)
  1. Huqiang Cheng (4 papers)
  2. Xiaofeng Liao (9 papers)
  3. Huaqing Li (11 papers)
  4. You Zhao (2 papers)
Citations (1)
