Extend the log log p PoA lower bound from gradient-oracle to sample-complexity models

Determine whether the Ω(log log p) lower bound on the price of adaptivity (PoA) for constant-probability suboptimality, established when the initial distance to the optimum is uncertain up to a factor p and the algorithm is a stochastic first-order method that observes one gradient per sample, also holds for general stochastic optimization algorithms with unrestricted access to each sample function (i.e., in the sample-complexity model).
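
For orientation, a loose schematic rendering of the quantity in question (our paraphrase, not the paper's exact definition): the PoA compares the guarantee an algorithm can achieve without knowing a problem parameter to the minimax-optimal guarantee when that parameter is known,

    \mathrm{PoA}(\mathsf{A}) \;\approx\; \sup_{\theta \in \Theta} \frac{\mathrm{Err}_{\mathsf{A}}(\theta)}{\mathrm{Err}^{\star}(\theta)},

where Θ denotes the uncertainty set of problem parameters, Err_A(θ) the suboptimality guarantee of algorithm A on instances with parameter θ, and Err*(θ) the minimax-optimal guarantee when θ is known in advance; these symbols are illustrative and not the paper's notation. In this language, the lower bound says that any algorithm facing a factor-p uncertainty in the initial distance must lose a multiplicative Ω(log log p) relative to the known-parameter optimum.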

Background

Theorem 2 proves an Ω(log log p) PoA lower bound for constant-probability suboptimality via a reduction to noisy binary search, but this lower bound applies only to stochastic first-order algorithms, which observe a single gradient per sample.
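
One heuristic for where the log log p scaling could come from (our gloss on the reduction, not a reproduction of the proof): a factor-p uncertainty in the initial distance leaves roughly N ≈ log₂ p candidate scales (say, powers of two spanning the uncertainty interval), and identifying the correct scale from stochastic gradient feedback behaves like a noisy binary search over N items. Locating an item among N with constant success probability requires

    \Omega(\log N) \;=\; \Omega(\log\log p)

noisy comparisons, which suggests a matching multiplicative overhead for adaptive methods; Theorem 2 makes a version of this rigorous, but only in the one-gradient-per-sample model.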

Strengthening this result to algorithms with full access to each sample function is challenging because, in the constructed hard instances, observing certain sample function values can immediately reveal the optimum. Whether the same Ω(log log p) hardness persists in the broader sample-complexity model therefore remains unresolved.

References

Whether the log log p PoA lower bound also holds for sample complexity remains an open problem.

The Price of Adaptivity in Stochastic Convex Optimization (Carmon et al., arXiv:2402.10898, 16 Feb 2024), Section 5 (Discussion), paragraph "Sample complexity vs. gradient oracle complexity."