Extend the log log p PoA lower bound from gradient-oracle to sample-complexity models
Determine whether the Ω(log log p) lower bound on the price of adaptivity (PoA) for constant-probability suboptimality, under uncertainty by a factor p in the initial distance to the optimum, established for stochastic first-order algorithms that compute one gradient per sample, also holds for general stochastic optimization algorithms with unrestricted access to each sampled function (i.e., in the sample complexity model).
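For concreteness, the price of adaptivity can be sketched as follows. This is a hedged paraphrase of the quantity in question, not the paper's exact definition; the notation (risk functional R, uncertainty class, algorithm classes) is assumed for illustration.

```latex
% Sketch (assumed notation): let R(A, P) denote the constant-probability
% suboptimality of algorithm A on instance P after a given number of samples,
% and let \mathcal{P}_p be the class of instances whose initial distance to
% the optimum is known only up to a factor p. Then, roughly,
\[
\mathrm{PoA}(p)
\;\approx\;
\frac{\displaystyle \inf_{A}\, \sup_{P \in \mathcal{P}_p} R(A, P)}
     {\displaystyle \sup_{P \in \mathcal{P}_p}\, \inf_{A_P} R(A_P, P)},
\]
% the ratio between the best worst-case performance of a single algorithm
% that does not know the instance parameters and the worst-case performance
% when the algorithm may be tuned to each instance. The cited result shows
% \mathrm{PoA}(p) = \Omega(\log\log p) when each sample yields one gradient
% evaluation; the open problem asks whether the same lower bound persists
% when the algorithm has unrestricted access to each sampled function.
```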
References
Whether the log log p PoA lower bound also holds for sample complexity remains an open problem.
— The Price of Adaptivity in Stochastic Convex Optimization
(arXiv:2402.10898, Carmon et al., 16 Feb 2024), Section 5 (Discussion), "Sample complexity vs. gradient oracle complexity."