
Minimizing Dynamic Regret on Geodesic Metric Spaces (2302.08652v2)

Published 17 Feb 2023 in cs.LG

Abstract: In this paper, we consider the sequential decision problem where the goal is to minimize the general dynamic regret on a complete Riemannian manifold. The task of offline optimization on such a domain, also known as a geodesic metric space, has recently received significant attention. The online setting has received far less attention, and it has remained an open question whether the body of results that hold in the Euclidean setting can be transplanted to Riemannian manifolds, where new challenges (e.g., curvature) come into play. We show how to obtain optimistic regret bounds on manifolds with non-positive curvature whenever improper learning is allowed, and we propose an array of adaptive no-regret algorithms. To the best of our knowledge, this is the first work that considers general dynamic regret and develops "optimistic" online learning algorithms that can be employed on geodesic metric spaces.
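
For context, general dynamic regret measures the learner's cumulative loss against an arbitrary comparator sequence rather than a single fixed point. A standard textbook form (the notation below is illustrative, not necessarily the paper's own) is

\[
\mathrm{D\text{-}Reg}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\]

where x_t is the learner's iterate and f_t is the loss revealed at round t; taking u_1 = \cdots = u_T recovers the usual static regret. The "optimistic" algorithms mentioned in the abstract additionally exploit a hint M_t predicting the next gradient. In the Euclidean case, the classical optimistic step (in the style of Rakhlin and Sridharan, COLT 2013) is

\[
x_t = \Pi_{\mathcal{X}}\!\left(\hat{x}_{t-1} - \eta M_t\right), \qquad
\hat{x}_t = \Pi_{\mathcal{X}}\!\left(\hat{x}_{t-1} - \eta \nabla f_t(x_t)\right),
\]

with regret scaling with the cumulative prediction error \sum_t \|\nabla f_t(x_t) - M_t\|^2. Roughly speaking, the manifold analogue replaces vector subtraction with the exponential map and Euclidean projection with geodesic projection, which is where curvature enters the analysis.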
