General Hölder Smooth Convergence Rates Follow From Specialized Rates Assuming Growth Bounds (2104.10196v2)
Abstract: Often in the analysis of first-order methods for both smooth and nonsmooth optimization, assuming the existence of a growth/error bound or a KL condition facilitates a much stronger convergence analysis; hence, separate analyses are typically needed for the general case and for the growth-bounded case. We give meta-theorems for deriving general convergence rates from those that assume a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point, subgradient, bundle, dual averaging, gradient descent, Frank-Wolfe, and universal accelerated methods immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. New convergence results follow for bundle methods, dual averaging, and Frank-Wolfe. Our results can lift any rate based on Hölder continuous gradients and Hölder growth bounds. Moreover, our theory provides simple proofs of optimal convergence lower bounds under Hölder growth from textbook examples that lack growth bounds.
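For context, the two Hölder-type conditions referenced in the abstract are commonly stated as follows; this is standard notation rather than necessarily the paper's own, so the exponents and constants below should be read as an illustrative sketch. Hölder continuity of the gradient with exponent $\nu \in (0,1]$ and constant $L > 0$ requires
\[
  \|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|^{\nu} \quad \text{for all } x, y,
\]
while Hölder growth with exponent $p \ge 1$ and constant $\mu > 0$ requires
\[
  f(x) - f^{\star} \ge \mu\,\operatorname{dist}(x, X^{\star})^{p} \quad \text{for all feasible } x,
\]
where $X^{\star}$ is the set of minimizers and $f^{\star}$ the optimal value. A meta-theorem in the spirit described above takes a convergence rate proved under a growth condition of this form and lifts it to a rate for general convex objectives satisfying only the smoothness condition.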