Extension of variance-reduced optimality to constrained settings (X ≠ R^d)
Establish whether variance-reduced stochastic optimization schemes can be adapted to constrained stochastic convex optimization with a nontrivial convex constraint set X ≠ R^d so as to match the non-asymptotic local minimax lower bounds that such schemes are known to achieve in the unconstrained case X = R^d.
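For context, the standard way a variance-reduced scheme is adapted to a constraint set is to compose each variance-reduced stochastic step with a Euclidean projection onto $X$. The sketch below is a minimal, hypothetical illustration (not the paper's algorithm): projected SVRG on a toy least-squares objective, with $X$ taken to be an $\ell_2$ ball; the radius `R`, step size, and epoch counts are arbitrary illustrative choices.

```python
import numpy as np

def project_l2_ball(x, R):
    """Euclidean projection onto X = {x : ||x||_2 <= R}."""
    norm = np.linalg.norm(x)
    return x if norm <= R else (R / norm) * x

def projected_svrg(A, b, R, step=0.05, epochs=20, seed=0):
    """Projected SVRG for f(x) = (1/2n) * ||Ax - b||^2 over an l2 ball.

    Each inner step uses the SVRG variance-reduced gradient estimate,
    then projects back onto the constraint set X.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_ref = x.copy()                          # snapshot (anchor) point
        full_grad = A.T @ (A @ x_ref - b) / n     # full gradient at the anchor
        for _ in range(n):
            i = rng.integers(n)
            # variance-reduced estimate: grad f_i(x) - grad f_i(x_ref) + full_grad
            g = A[i] * (A[i] @ x - b[i]) - A[i] * (A[i] @ x_ref - b[i]) + full_grad
            x = project_l2_ball(x - step * g, R)  # projected update keeps x in X
    return x

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
b = rng.standard_normal(200)
x_hat = projected_svrg(A, b, R=1.0)
print(x_hat, np.linalg.norm(x_hat))
```

Whether a projected variant of this kind retains the local minimax optimality established in the unconstrained case is precisely the open question stated above.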
References
Our work leaves open a number of questions. For instance, there is a significant body of work that constructs non-asymptotic local minimax lower bounds by comparing the given problem with its hardest alternative in a shrinking local neighborhood of that problem. Indeed, in the absence of the constraint set (i.e., $X = \mathbb{R}^d$), variance-reduced schemes similar to the one studied in our paper achieve such non-asymptotic local minimax lower bounds. It would be interesting to see whether these results can be extended to our setting, where $X \neq \mathbb{R}^d$.