Inexact Online Proximal Mirror Descent for Time-Varying Composite Optimization (2304.04710v1)
Abstract: In this paper, we consider online proximal mirror descent for solving time-varying composite optimization problems. In many applications, the algorithm naturally involves errors in the gradient and in the proximal operator. We obtain sharp estimates on the dynamic regret of the algorithm when the regular part of the cost is convex and smooth. When the Bregman distance is the Euclidean distance, our result improves on previous work in two ways: (i) we establish a sharper regret bound, in the sense that our estimate does not contain the $O(T)$ term that appears in the earlier bound; (ii) we obtain the result when the domain is the whole space $\mathbb{R}^n$, whereas the previous result was established only for bounded domains. We also provide numerical tests for problems involving errors in the gradient and proximal operator.
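To illustrate the kind of iteration the abstract describes, here is a minimal sketch of inexact online proximal mirror descent in the Euclidean case, where the mirror step reduces to a standard proximal-gradient step. The specific time-varying loss (least squares with an $\ell_1$ regularizer), the additive Gaussian noise model for the gradient and proximal errors, and the step size are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(v, lam):
    """Exact proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def inexact_online_prox_gd(A_seq, b_seq, lam=0.1, step=0.5,
                           grad_err=1e-3, prox_err=1e-3, seed=0):
    """Sketch: Euclidean Bregman distance, time-varying losses
    f_t(x) = 0.5 * ||A_t x - b_t||^2 plus lam * ||x||_1, with
    additive noise modeling inexact gradient and prox evaluations."""
    rng = np.random.default_rng(seed)
    n = A_seq[0].shape[1]
    x = np.zeros(n)
    iterates = [x.copy()]
    for A_t, b_t in zip(A_seq, b_seq):
        # Inexact gradient of the smooth part f_t at the current iterate.
        grad = A_t.T @ (A_t @ x - b_t) + grad_err * rng.standard_normal(n)
        # Proximal (mirror) step with Euclidean distance, computed inexactly.
        x = soft_threshold(x - step * grad, step * lam)
        x += prox_err * rng.standard_normal(n)
        iterates.append(x.copy())
    return iterates

if __name__ == "__main__":
    # Usage on a short synthetic time-varying sequence (hypothetical data).
    rng = np.random.default_rng(1)
    T, m, n = 50, 20, 10
    A_seq = [rng.standard_normal((m, n)) for _ in range(T)]
    x_true = np.zeros(n); x_true[:3] = 1.0
    b_seq = [A @ (x_true + 0.01 * t * np.ones(n)) for t, A in enumerate(A_seq)]
    xs = inexact_online_prox_gd(A_seq, b_seq)
    print("final iterate:", np.round(xs[-1], 3))
```

Tracking how the iterates follow the drifting minimizers of these time-varying costs is exactly what the dynamic regret bounds in the paper quantify, here only as an informal illustration.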