Online Convex Optimization Using Coordinate Descent Algorithms (2201.10017v2)
Abstract: This paper considers the problem of online optimization where the objective function is time-varying. In particular, we extend coordinate descent-type algorithms to the online case, where the objective function varies after a finite number of iterations of the algorithm. Instead of solving the problem exactly at each time step, we apply only a finite number of iterations per time step. Commonly used notions of regret measure the performance of the online algorithm. Moreover, we consider coordinate descent algorithms with different updating rules, including both deterministic and stochastic rules developed in the literature on classical offline optimization. A thorough regret analysis is given for each case. Finally, numerical simulations illustrate the theoretical results.
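The core idea, a fixed budget of coordinate-descent iterations per time step of a time-varying objective, with performance tracked via dynamic regret, can be sketched as follows. This is a minimal illustration, not the paper's exact method: the quadratic objectives `f_t(x) = 0.5 * ||x - b_t||^2`, the random-coordinate rule, and the step size `eta` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, K = 5, 50, 3   # dimension, number of time steps, CD iterations per step
eta = 0.5            # coordinate step size (illustrative)

# Time-varying quadratics f_t(x) = 0.5*||x - b_t||^2 with drifting minimizer b_t
targets = [np.sin(0.1 * t) * np.ones(d) for t in range(T)]

def grad(x, b):
    """Gradient of f_t at x for target b."""
    return x - b

x = np.zeros(d)
regret = 0.0
for t in range(T):
    b = targets[t]
    # Objective f_t changes after K iterations, so update only K coordinates
    for _ in range(K):
        i = rng.integers(d)            # stochastic (random-coordinate) rule
        x[i] -= eta * grad(x, b)[i]    # single-coordinate descent step
    # Dynamic regret increment: f_t(x_t) - f_t(x_t^*), where x_t^* = b_t
    regret += 0.5 * np.dot(x - b, x - b)

print(f"cumulative dynamic regret over {T} steps: {regret:.4f}")
```

Swapping the random-coordinate rule for a deterministic one (e.g. cyclic, or Gauss-Southwell, which picks the coordinate with the largest gradient magnitude) only changes the choice of `i` inside the inner loop.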