
Extension of variance-reduced optimality to constrained settings (X ≠ R^d)

Establish whether variance-reduced stochastic optimization schemes can be adapted to constrained stochastic convex optimization with a nontrivial convex constraint set X ≠ R^d so as to achieve the same type of non-asymptotic local minimax lower bounds that are known to be attainable in the unconstrained case X = R^d.


Background

The discussion notes a body of work constructing non-asymptotic local minimax lower bounds via hardest local alternatives and observes that, in the unconstrained case (X = R^d), variance-reduced schemes can attain such bounds. The constrained setting (X ≠ R^d) introduces geometric complexities that may affect algorithm design and performance.

The open question is whether the favorable non-asymptotic optimality results known for unconstrained variance-reduced methods can be extended to constrained problems, preserving instance-dependent optimality under convex constraints.
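For concreteness, one natural way to adapt a variance-reduced scheme to a constraint set X is to combine the usual variance-reduced gradient estimate with a Euclidean projection onto X. The sketch below is purely illustrative and is not the algorithm studied in the paper or known to attain the lower bounds in question; the helper names (grad_i, project_X) and all parameter values are hypothetical.

```python
import numpy as np

def projected_svrg(x0, grad_i, project_X, n, step, epochs=10, inner_steps=100, seed=None):
    """Projected SVRG-style sketch: variance-reduced stochastic gradients
    combined with Euclidean projection onto the convex constraint set X."""
    rng = np.random.default_rng(seed)
    x_ref = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        # Full gradient at the reference (snapshot) point.
        full_grad = np.mean([grad_i(i, x_ref) for i in range(n)], axis=0)
        x = x_ref.copy()
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Variance-reduced gradient estimate (SVRG-style control variate).
            g = grad_i(i, x) - grad_i(i, x_ref) + full_grad
            # Projection enforces the constraint when X != R^d.
            x = project_X(x - step * g)
        x_ref = x
    return x_ref

# Toy usage: least squares restricted to the unit Euclidean ball.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = A @ (0.1 * np.ones(5)) + 0.01 * rng.standard_normal(50)
grad_i = lambda i, x: 2.0 * A[i] * (A[i] @ x - b[i])
project_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_hat = projected_svrg(np.zeros(5), grad_i, project_ball, n=50, step=1e-2)
```

The open question concerns whether an update of this general form (or any other constrained variance-reduced scheme) can match the non-asymptotic local minimax lower bounds known to be attainable when X = R^d.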

References

Our work leaves open a number of questions. For instance, there is a significant body of work which constructs non-asymptotic local minimax lower bounds by comparing the given problem with its hardest alternative in a local shrinking neighborhood of the given problem. Indeed, in the absence of the constraint set (i.e. $X = \mathbb{R}^d$), variance-reduced schemes --- ones similar to the one studied in our paper --- achieve such non-asymptotic local minimax lower bounds. It would be interesting to see if such results can be extended to our setting where $X \neq \mathbb{R}^d$.