Optimal Decentralized Composite Optimization for Convex Functions (2312.15845v3)

Published 26 Dec 2023 in math.OC

Abstract: In this paper, we focus on decentralized composite optimization for convex functions. Because of advantages such as robustness against network failures and the absence of a communication bottleneck at a central server, decentralized optimization has attracted much research attention in the signal processing, control, and optimization communities. In past years, many optimal algorithms have been proposed for the case where the objective function is smooth and (strongly) convex. However, it remains an open question whether one can design an optimal algorithm when a non-smooth regularization term is present. In this paper, we fill the gap between smooth decentralized optimization and decentralized composite optimization and propose the first algorithm that achieves both the optimal computation and communication complexities. Our experiments also validate the effectiveness and efficiency of our algorithm in both computation and communication.
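
The composite problem described in the abstract has the form min_x (1/n) * sum_i f_i(x) + r(x), where each smooth convex loss f_i is held by one agent in a network and r is a non-smooth regularizer such as the l1 norm. The paper's proposed optimal algorithm is not reproduced on this page, so the snippet below is only a minimal sketch of a generic decentralized proximal-gradient baseline (gossip averaging with neighbors, a local gradient step, then a proximal step for the regularizer); the ring topology, least-squares losses, step size, and regularization weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a generic decentralized proximal-gradient iteration for
#   min_x  (1/n) * sum_i f_i(x) + lam * ||x||_1,
# where each agent i holds a local smooth loss f_i (here: least squares).
# This is an illustrative baseline, NOT the optimal algorithm proposed in the
# paper; the data, mixing matrix, step size, and lam are all made up.

rng = np.random.default_rng(0)
n_agents, dim = 4, 10
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]  # local data (assumed)
b = [rng.standard_normal(20) for _ in range(n_agents)]
lam = 0.1                                                      # l1 weight (assumed)

# Doubly stochastic mixing matrix for a ring network (assumed topology).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def grad_f(i, x):
    """Gradient of the local least-squares loss f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

def prox_l1(v, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

step = 1e-3                       # step size (assumed, not tuned)
X = np.zeros((n_agents, dim))     # one local copy of the decision variable per agent

for _ in range(500):
    # 1) one round of gossip averaging with neighbors (communication step)
    X_mixed = W @ X
    # 2) local gradient step on each agent's smooth loss (computation step)
    X_grad = np.array([X_mixed[i] - step * grad_f(i, X_mixed[i])
                       for i in range(n_agents)])
    # 3) proximal step handling the non-smooth l1 regularizer
    X = np.array([prox_l1(X_grad[i], step) for i in range(n_agents)])

print("consensus disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```

An algorithm that is optimal in the sense of the abstract would couple the communication and computation steps more carefully (for example via gradient tracking or acceleration) to reach the best possible complexities in both; the sketch above only makes the basic communication/computation split of decentralized composite optimization concrete.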
