Characterize the uniform-high-probability price of adaptivity under distance uncertainty

Characterize the exact price of adaptivity for suboptimality guarantees that hold with probability at least 1−δ simultaneously for all δ ∈ (0,1) in non-smooth stochastic convex optimization with L-Lipschitz sample functions, when the initial distance to the optimum is known only up to a factor p (i.e., R ∈ [1,p]). Establish tight upper and lower bounds on this uniform-in-δ price of adaptivity as a function of p.
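
As a point of reference, one schematic way to write the quantity in question is sketched below; this is an assumed rendering for orientation, not the paper's formal definition, with the minimax rate Θ(LR/√n) over n samples taken as the benchmark.

% Schematic (assumed) form of the PoA under distance uncertainty; Err_n denotes
% either the expected suboptimality or a (1 - delta)-quantile of the suboptimality.
\[
  \mathrm{PoA}(\mathcal{A}, p)
  \;\approx\;
  \sup_{R \in [1,\,p]}\;
  \frac{\displaystyle \sup_{F:\;\|x_0 - x_\star\|\le R}\ \mathrm{Err}_n(\mathcal{A}, F)}
       {\Theta\!\bigl(LR/\sqrt{n}\bigr)},
\]
where the inner supremum ranges over problem instances with L-Lipschitz sample functions whose optimum lies within distance R of the initialization.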

Background

The paper proves that the expected-error price of adaptivity (PoA) must be at least logarithmic in p when the Lipschitz constant is known but the distance to the optimizer is uncertain (Theorem 1). In contrast, existing high-probability guarantees (e.g., Carmon and Hinder [6]) achieve only double-logarithmic dependence on p, but they require the confidence level δ to be specified in advance.
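
In schematic form, the gap these two results leave open can be summarized as follows; the δ-dependence and the precise double-logarithmic factor below are indicative placeholders rather than the exact statements of the cited results.

% Indicative summary of the known bounds (placeholder constants and delta-dependence).
\[
  \mathrm{PoA}_{\mathrm{expected}}(p) \;\ge\; c\,\log p
  \quad \text{(Theorem 1)},
  \qquad
  \mathrm{PoA}_{1-\delta}(p) \;\le\; C(\delta)\,\mathrm{polyloglog}(p)
  \quad \text{for a fixed, prespecified } \delta.
\]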

Because a uniform-in-δ high-probability bound would integrate to an expectation bound, any such uniform guarantee cannot have sub-logarithmic dependence on p without contradicting the expected-error lower bound. Consequently, the best possible uniform-in-δ PoA must be at least logarithmic in p, but its precise characterization remains unresolved.
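
To spell out the integration step: suppose an algorithm guarantees, simultaneously for every δ ∈ (0,1), suboptimality at most G(δ) with probability at least 1−δ. Taking G(δ) = A·ε_n·(1 + √(log(1/δ))) as a representative (assumed) shape, with ε_n the minimax rate and A the would-be uniform-in-δ PoA, the quantile representation of the expectation gives:

% Assumed uniform-in-delta bound: Q_{1-delta} <= G(delta) = A * eps_n * (1 + sqrt(log(1/delta))).
% Integrating the quantile function over delta converts it into an expectation bound.
\[
  \mathbb{E}\bigl[f(\hat{x}) - f_\star\bigr]
  \;=\; \int_0^1 Q_{1-\delta}\,\mathrm{d}\delta
  \;\le\; \int_0^1 G(\delta)\,\mathrm{d}\delta
  \;=\; A\,\varepsilon_n\Bigl(1 + \int_0^1 \sqrt{\log(1/\delta)}\,\mathrm{d}\delta\Bigr)
  \;=\; A\,\varepsilon_n\Bigl(1 + \tfrac{\sqrt{\pi}}{2}\Bigr)
  \;=\; O(A\,\varepsilon_n),
\]
where Q_{1−δ} denotes the (1−δ)-quantile of the suboptimality. Hence a uniform-in-δ PoA of order A implies an expected-error PoA of order A, and Theorem 1 then forces A = Ω(log p).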

References

Thus, the best possible probability-(1−δ) PoA bound holding uniformly for all δ must be logarithmic in p; characterizing the uniform-high-probability PoA is an open problem.

The Price of Adaptivity in Stochastic Convex Optimization (arXiv:2402.10898, Carmon et al., 16 Feb 2024), Section 5 (Discussion), "PoA in high-probability is lower (!) than in expectation".