Distributed Fractional Bayesian Learning for Adaptive Optimization (2404.11354v1)

Published 17 Apr 2024 in math.OC, cs.DC, cs.LG, and cs.MA

Abstract: This paper considers a distributed adaptive optimization problem in which each agent has access only to its local cost function, all of which share a common unknown parameter, and the agents aim to collaboratively estimate the true parameter and find the optimal solution over a connected network. A general mathematical framework for this problem has not yet been studied. We aim to provide insights for handling parameter uncertainty in distributed optimization while simultaneously finding the optimal solution. To this end, we propose a novel Prediction while Optimization scheme that combines distributed fractional Bayesian learning, which updates beliefs over the unknown parameter via weighted averaging of log-beliefs, with distributed gradient descent, which updates the estimate of the optimal solution. Under suitable assumptions, we prove that every agent's beliefs and decision variables converge almost surely to the true parameter and to the optimal solution under the true parameter, respectively. We further establish a sublinear convergence rate for the belief sequence. Finally, numerical experiments corroborate the theoretical analysis.
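The abstract's two coupled updates can be sketched in a toy simulation: agents hold beliefs over a finite candidate set for the unknown parameter, mix log-beliefs with neighbors and apply a fractional likelihood update, then run consensus-based gradient descent on their local costs evaluated at the belief-averaged parameter. This is only a minimal illustration under assumed quadratic costs, a Gaussian observation model, and a ring network; all names and constants are illustrative, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
thetas = np.array([0.0, 1.0, 2.0])   # candidate parameters; true one is 1.0
true_idx = 1
n_agents = 4

# Doubly stochastic mixing matrix for a 4-agent ring network (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

log_beliefs = np.zeros((n_agents, len(thetas)))  # uniform priors
x = rng.normal(size=n_agents)                    # decision variables
alpha = 0.8                                      # fractional exponent in (0, 1)
offsets = [0.5, -0.5, 0.25, -0.25]               # local cost offsets, sum to 0

def local_cost_grad(i, xi, theta):
    # f_i(x; theta) = 0.5 * (x - theta - b_i)^2 with sum(b_i) = 0,
    # so the network-wide optimum under the true theta = 1.0 is x* = 1.0.
    return xi - theta - offsets[i]

for t in range(400):
    # Noisy local observations of the unknown parameter (Gaussian, assumption).
    obs = thetas[true_idx] + 0.5 * rng.normal(size=n_agents)
    log_lik = -0.5 * ((obs[:, None] - thetas[None, :]) / 0.5) ** 2

    # Weighted averaging on log-beliefs plus a fractional likelihood update.
    log_beliefs = W @ log_beliefs + alpha * log_lik
    log_beliefs -= log_beliefs.max(axis=1, keepdims=True)  # numerical stability
    beliefs = np.exp(log_beliefs)
    beliefs /= beliefs.sum(axis=1, keepdims=True)

    # Distributed gradient descent at each agent's belief-averaged parameter.
    theta_hat = beliefs @ thetas
    step = 1.0 / (t + 10)  # diminishing step size
    grads = np.array([local_cost_grad(i, x[i], theta_hat[i])
                      for i in range(n_agents)])
    x = W @ x - step * grads
```

After a few hundred iterations, each agent's belief concentrates on the true parameter and the decision variables reach consensus near the optimum under that parameter, mirroring the almost-sure convergence the paper proves under its assumptions.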

Authors (4)
  1. Yaqun Yang
  2. Jinlong Lei
  3. Guanghui Wen
  4. Yiguang Hong
