Efficient randomized mirror descents in stochastic online convex optimization
Abstract: In this paper we consider the application of mirror descent (dual averaging) to stochastic online convex optimization problems. We compare the classical mirror descent of Nemirovski and Yudin (1979) with the dual averaging of Nesterov (2005) and with the Grigoriadis–Khachiyan algorithm (1995). The Grigoriadis–Khachiyan algorithm turns out to be a randomized mirror descent (dual averaging) with randomization in the KL-projection of the (sub)gradient onto the unit simplex. We show that this randomization is an optimal way of solving sparse matrix games and certain other problems arising in convex optimization and experts weighting.
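To make the setting concrete, here is a hedged sketch (not the paper's exact algorithm) of entropic mirror descent on the unit simplex with Grigoriadis–Khachiyan style randomization for a matrix game: instead of the full gradient, a single payoff column is sampled from the opponent's mixed strategy, so each iteration touches only one column of the matrix. All names, the fixed uniform opponent, and the constant step size `eta` are illustrative assumptions.

```python
# Sketch: randomized entropic mirror descent ("multiplicative weights") on the
# unit simplex for a matrix game. Assumptions (not from the paper): a fixed
# uniform opponent strategy, a constant step size, and averaging of iterates.
import math
import random

def randomized_mirror_descent(A, steps=2000, eta=0.05, seed=0):
    """Approximate the row player's mixed strategy for payoff matrix A.

    The KL (entropy) prox step reduces to an exponential-weights update,
    i.e. a softmax of the accumulated sampled payoffs.
    """
    rng = random.Random(seed)
    n, m = len(A), len(A[0])
    log_w = [0.0] * n          # accumulated (scaled) stochastic gradients
    avg = [0.0] * n            # running average of iterates
    y = [1.0 / m] * m          # illustrative fixed uniform opponent
    for _ in range(steps):
        # KL-projection onto the simplex: softmax of log-weights
        mx = max(log_w)
        w = [math.exp(v - mx) for v in log_w]
        s = sum(w)
        x = [v / s for v in w]
        # randomization: sample ONE column j ~ y; A[:, j] is an unbiased
        # stochastic gradient of the expected payoff x^T A y
        j = rng.choices(range(m), weights=y)[0]
        for i in range(n):
            log_w[i] += eta * A[i][j]   # ascent for the maximizing row player
            avg[i] += x[i] / steps
    return avg

x = randomized_mirror_descent([[1.0, -1.0], [-1.0, 1.0]])
```

Because only one sampled column enters each update, the per-iteration cost is O(n) rather than O(nm), which is the source of the speed-up on sparse matrix games that the abstract refers to.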