A Continuous-Time Nesterov Accelerated Gradient Method for Centralized and Distributed Online Convex Optimization (2009.12545v1)
Abstract: This paper studies the online convex optimization problem using an Online Continuous-Time Nesterov Accelerated Gradient method (OCT-NAG). We show that the continuous-time dynamics generated by the online version of the Bregman Lagrangian achieve a constant static regret $\frac{c}{\sigma}$, independent of $T$, provided that certain boundedness assumptions on the objective functions and optimal solutions hold. To the best of the authors' knowledge, this is the lowest static regret in the literature (lower than $O(\log T)$). We further show that, under the same assumptions, the dynamic regret of the algorithm is $O(T)$, which is comparable with existing methods. Simulation results validate the effectiveness and efficiency of the method, and also show that the algorithm performs well in terms of dynamic regret under certain scaling conditions. In addition, we apply the proposed online optimization method to distributed online optimization problems and show that it achieves an $O(\sqrt{T})$ static regret, comparable with existing distributed online optimization methods. Unlike those methods, the proposed approach requires neither a gradient boundedness assumption nor a compact constraint set, which admits objective functions and optimization problems different from those in the literature. A comparable dynamic regret is obtained. Simulation results demonstrate the effectiveness and efficiency of the distributed algorithm.
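The abstract does not reproduce the paper's online Bregman-Lagrangian dynamics, so the sketch below instead uses the classical continuous-time Nesterov ODE of Su, Boyd, and Candès, $\ddot{x} + \frac{3}{t}\dot{x} + \nabla f_t(x) = 0$, driven by a time-varying loss, to illustrate the general setting: integrate accelerated dynamics against a changing objective and measure static regret against the best fixed point in hindsight. The quadratic loss $f_t(x) = \frac{\sigma}{2}\|x - z(t)\|^2$, the drift $z(t)$, the horizon $T$, and all constants are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (NOT the paper's OCT-NAG dynamics): classical continuous-time
# Nesterov ODE  x'' + (3/t) x' + grad f_t(x) = 0  with an assumed time-varying
# quadratic loss, integrated numerically; static regret is then accumulated
# against the best fixed comparator in hindsight.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

sigma = 1.0                                                  # assumed modulus
z = lambda t: np.array([np.sin(0.1 * t), np.cos(0.1 * t)])   # drifting minimizer

def f(t, x):
    # assumed online loss f_t(x) = 0.5 * sigma * ||x - z(t)||^2
    return 0.5 * sigma * np.sum((x - z(t)) ** 2)

def dynamics(t, state):
    # first-order form of the second-order ODE: state = (x, v), v = x'
    x, v = state[:2], state[2:]
    grad = sigma * (x - z(t))
    return np.concatenate([v, -(3.0 / t) * v - grad])

T, t0 = 100.0, 1e-2              # start slightly after 0 to avoid the 3/t singularity
x0 = np.array([2.0, -2.0])
sol = solve_ivp(dynamics, (t0, T), np.concatenate([x0, np.zeros(2)]),
                dense_output=True, rtol=1e-8, atol=1e-8)

ts = np.linspace(t0, T, 2000)
xs = sol.sol(ts)[:2].T

# For this quadratic loss, the best fixed point in hindsight is the time
# average of z(t), which minimizes the integrated loss over constant x.
x_star = np.mean([z(t) for t in ts], axis=0)
regret = trapezoid([f(t, x) - f(t, x_star) for t, x in zip(ts, xs)], ts)
print(f"accumulated static regret up to T={T}: {regret:.4f}")
```

How the printed regret grows with $T$ depends on the chosen dynamics and scaling; the paper's constant $\frac{c}{\sigma}$ bound is proved for its specific online Bregman-Lagrangian flow under its boundedness assumptions, not for this classical ODE.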