Agnostic Universal Rates of ERM
- The paper establishes a trichotomy: exponential decay for finite $\mathcal{H}$, super-root rates for infinite $\mathcal{H}$ with finite VC dimension, and arbitrarily slow rates when the VC dimension is infinite.
- It characterizes the error decay purely in terms of the cardinality and VC dimension of the hypothesis class, giving a complete combinatorial picture of ERM learning curves.
- These universal rates refine classical PAC analysis by demonstrating how ERM learning curves adapt to inherent agnostic noise and model misspecification.
Agnostic universal rates of Empirical Risk Minimization (ERM) refer to the fundamental speed at which the expected excess classification risk of ERM decays with sample size for all possible data distributions, without assuming that the hypothesis class contains the true labeling function. Recent work, notably "Universal Rates of ERM for Agnostic Learning" (Hanneke & Xu, 2025), delivers a precise trichotomy of possible universal rates for ERM in the agnostic setting. These results clarify the behavior and limitations of ERM-based learning curves for broad concept classes and fixed target distributions, providing a complete combinatorial characterization and several structural refinements for finer, target-dependent rates.
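A concrete reading of this setup: the following minimal Python sketch (the threshold class, noise level, and data distribution are illustrative assumptions, not taken from the paper) runs ERM over a finite hypothesis class on agnostic data, where even the best hypothesis has nonzero risk.

```python
import random

def predict(t, x):
    # Threshold classifier h_t(x) = 1[x >= t].
    return int(x >= t)

def empirical_risk(t, sample):
    # Sample mean of the 0-1 error of threshold t.
    return sum(predict(t, x) != y for x, y in sample) / len(sample)

def erm(thresholds, sample):
    # ERM: return the hypothesis with the smallest empirical 0-1 risk.
    return min(thresholds, key=lambda t: empirical_risk(t, sample))

# Agnostic data: y = 1[x >= 0.5], flipped with probability 0.1, so the
# best-in-class risk is 0.1 rather than 0 (no realizability assumed).
random.seed(0)
sample = []
for _ in range(500):
    x = random.random()
    y = int(x >= 0.5)
    if random.random() < 0.1:
        y = 1 - y
    sample.append((x, y))

H = [i / 10 for i in range(11)]  # a finite class of 11 thresholds
t_hat = erm(H, sample)
```

With 500 samples, the selected threshold lands near the best-in-class value 0.5 despite the label noise.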
1. Agnostic Universal Rate Trichotomy
ERM in the agnostic binary classification setting outputs a classifier by minimizing the sample mean of the $0$-$1$ error over a class $\mathcal{H}$. The universal agnostic rate is defined via the decay rate of the expected excess risk
$$\mathbb{E}\big[\mathrm{er}_P(\hat{h}_n)\big] - \inf_{h \in \mathcal{H}} \mathrm{er}_P(h), \qquad \mathrm{er}_P(h) := \mathbb{P}_{(X, Y) \sim P}\big(h(X) \neq Y\big),$$
as a function of the sample size $n$, for every data distribution $P$. The main result establishes a classification into exactly three rate regimes:
| Universal Rate | Characterization |
|---|---|
| Exponential, $e^{-n}$ | $\|\mathcal{H}\| < \infty$ |
| Super-root, $o(1/\sqrt{n})$ | $\|\mathcal{H}\| = \infty$ and $\mathrm{vc}(\mathcal{H}) < \infty$ |
| Arbitrarily slow | $\mathrm{vc}(\mathcal{H}) = \infty$ |
- If $\mathcal{H}$ is finite, ERM achieves an exponentially fast rate in $n$ for every distribution.
- If $\mathcal{H}$ is infinite but has finite VC dimension, ERM achieves a super-root rate: for every distribution $P$, the expected excess risk is $o(1/\sqrt{n})$, faster than $1/\sqrt{n}$ (and strictly slower than any exponential).
- If $\mathcal{H}$ has infinite VC dimension, there exist distributions for which, under ERM, the expected excess risk decays slower than any prescribed positive function tending to zero.
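The finite-class regime can be illustrated with a toy simulation (a two-hypothesis class and Bernoulli labels, chosen purely for illustration): the probability that ERM returns the suboptimal hypothesis shrinks rapidly as $n$ grows, consistent with an exponential rate.

```python
import random

def erm_picks_suboptimal(n, rng):
    # Class H = {h0 = const 0, h1 = const 1} with labels Bernoulli(0.3):
    # h0 is best in class (risk 0.3 vs 0.7).  ERM returns h1 exactly when
    # the 1-labels form a strict majority of the sample.
    ones = sum(rng.random() < 0.3 for _ in range(n))
    return ones > n - ones

rng = random.Random(1)
trials = 2000
err_small = sum(erm_picks_suboptimal(10, rng) for _ in range(trials)) / trials
err_large = sum(erm_picks_suboptimal(200, rng) for _ in range(trials)) / trials
```

By a Chernoff bound the failure probability is at most $e^{-2n(0.5 - 0.3)^2}$, so `err_large` is essentially zero while `err_small` is a few percent.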
2. Formal Definitions and Mathematical Formulations
Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be i.i.d. samples from $P$. ERM selects
$$\hat{h}_n \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}[h(X_i) \neq Y_i].$$
The universal learning curve is the function $n \mapsto \mathbb{E}\big[\mathrm{er}_P(\hat{h}_n)\big] - \inf_{h \in \mathcal{H}} \mathrm{er}_P(h)$, where the expectation is with respect to the sample, and "universal" means the rate holds for every fixed (but arbitrary) distribution $P$.
The trichotomy states:
- If $|\mathcal{H}| < \infty$, then for any $P$:
$$\mathbb{E}\big[\mathrm{er}_P(\hat{h}_n)\big] - \inf_{h \in \mathcal{H}} \mathrm{er}_P(h) \leq C e^{-c n}$$
for constants $C, c > 0$ depending on $P$ and $\mathcal{H}$.
- If $|\mathcal{H}| = \infty$ and $\mathrm{vc}(\mathcal{H}) < \infty$:
$$\mathbb{E}\big[\mathrm{er}_P(\hat{h}_n)\big] - \inf_{h \in \mathcal{H}} \mathrm{er}_P(h) = o(1/\sqrt{n})$$
(i.e., for every $P$, $\sqrt{n}\,\big(\mathbb{E}[\mathrm{er}_P(\hat{h}_n)] - \inf_{h \in \mathcal{H}} \mathrm{er}_P(h)\big) \to 0$ as $n \to \infty$).
- If $\mathrm{vc}(\mathcal{H}) = \infty$, then for any pre-specified rate $R(n) \to 0$, there is a distribution $P$ such that
$$\mathbb{E}\big[\mathrm{er}_P(\hat{h}_n)\big] - \inf_{h \in \mathcal{H}} \mathrm{er}_P(h) \geq R(n)$$
for infinitely many $n$.
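The learning-curve definition above can be probed numerically. The Monte Carlo sketch below (threshold grid, uniform inputs, 10% label noise; all hypothetical choices, not from the paper) estimates the expected excess risk of ERM at two sample sizes and shows it shrinking, in line with the super-root regime for a finite-VC class.

```python
import random

def excess_risk_of_erm(n, thresholds, rng):
    # Draw n agnostic samples (y = 1[x >= 0.5], flipped w.p. 0.1), run ERM
    # over the threshold grid, and return the true excess risk.  For this
    # distribution er_P(h_t) = 0.1 + 0.8*|t - 0.5|, so the excess risk of
    # the ERM output t_hat is 0.8*|t_hat - 0.5| in closed form.
    sample = []
    for _ in range(n):
        x = rng.random()
        y = int(x >= 0.5)
        if rng.random() < 0.1:
            y = 1 - y
        sample.append((x, y))
    t_hat = min(thresholds, key=lambda t: sum(int(x >= t) != y for x, y in sample))
    return 0.8 * abs(t_hat - 0.5)

rng = random.Random(2)
grid = [i / 40 for i in range(41)]
trials = 100
curve = {n: sum(excess_risk_of_erm(n, grid, rng) for _ in range(trials)) / trials
         for n in (50, 400)}
```

The estimated curve decreases with $n$; of course a two-point simulation only illustrates, and cannot verify, the asymptotic $o(1/\sqrt{n})$ statement.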
3. Combinatorial Characterization
The rate regime is dictated entirely by two class properties:
- Finite hypothesis space ($|\mathcal{H}| < \infty$): implies the exponential rate.
- Infinite but VC-finite ($|\mathcal{H}| = \infty$, $\mathrm{vc}(\mathcal{H}) < \infty$): implies the super-root rate, with the exact speed always strictly better than $1/\sqrt{n}$ (as in classical uniform PAC rates), but never exponential.
- Infinite VC dimension: allows for distributions where the rate cannot be bounded by any prescribed function, so there is no universal guarantee on the speed of error decay.
This classification is sharp and exhaustive for all classes $\mathcal{H}$.
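These combinatorial conditions can be verified by brute force on small instances. Below is a sketch of a shattering test, using a grid of thresholds as a hypothetical stand-in for the full threshold class (whose VC dimension is 1):

```python
def shatters(hypotheses, points):
    # A class shatters a point set iff its restrictions to the points
    # realize all 2^k binary labelings.
    patterns = {tuple(h(x) for x in points) for h in hypotheses}
    return len(patterns) == 2 ** len(points)

# Threshold classifiers h_t(x) = 1[x >= t] on a grid.
H = [lambda x, t=t: int(x >= t) for t in [i / 10 for i in range(-1, 12)]]

one_point = shatters(H, [0.5])        # every singleton is shattered
two_points = shatters(H, [0.3, 0.7])  # the labeling (1, 0) is unrealizable
```

Since singletons are shattered but no pair is, the VC dimension is 1; the infinite threshold class therefore sits in the super-root regime of the table above.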
4. Target- and Bayes-Dependent Universal Rates
The paper supplies further subdivisions into target-dependent and Bayes-dependent universal rates, refining the analysis to the structure around the best-in-class or Bayes-optimal function.
- Target-dependent trichotomy: For a given "target" $h^\star \in \mathcal{H}$, the achievable rate for distributions where $h^\star$ is best in class can be classified by how the "near-best" competitors of $h^\star$ are organized (specifically, whether their set has finite VC dimension or not).
- Bayes-dependent trichotomy: Considering the true Bayes classifier $h^\star_P$, the rates depend on whether there exists an infinite (VC-)eluder sequence centered at $h^\star_P$, a sequence witnessing sustained disagreement among functions in $\mathcal{H}$.
These refinements provide a more finely grained view of distribution-dependent universal rates and clarify when ERM may adapt to "easy" or "hard" distributions for a fixed class $\mathcal{H}$.
5. Contrast to the Realizable Universal Rates
In the realizable case (where the labels are produced by some $h \in \mathcal{H}$, so $\inf_{h \in \mathcal{H}} \mathrm{er}_P(h) = 0$), "Universal Rates of Empirical Risk Minimization" (2412.02810) establishes a tetrachotomy (exponential, $1/n$, $\log(n)/n$, arbitrarily slow), with a richer spectrum of rates possible due to the absence of agnostic noise. The agnostic trichotomy is more compact: in the non-realizable case, the presence of unavoidable noise or model misspecification rules out the intermediate $1/n$-type rates for ERM; unless $\mathcal{H}$ is finite, no rate faster than super-root can be guaranteed. This confirms that agnostic learning is strictly harder: fast rates require both limited class complexity and zero noise.
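The realizable/agnostic gap shows up even in toy experiments. The comparison below (threshold grid, uniform inputs; the 0.1 noise level is an illustrative assumption) runs the same ERM with and without label noise at a fixed sample size; the noiseless run lands markedly closer to the optimum.

```python
import random

def mean_excess(n, noise, trials, rng):
    # Average true excess risk of ERM over a threshold grid when labels
    # y = 1[x >= 0.5] are flipped with probability `noise`.  For this
    # distribution the excess risk of threshold t is (1 - 2*noise)*|t - 0.5|.
    grid = [i / 40 for i in range(41)]
    total = 0.0
    for _ in range(trials):
        sample = []
        for _ in range(n):
            x = rng.random()
            y = int(x >= 0.5)
            if rng.random() < noise:
                y = 1 - y
            sample.append((x, y))
        t_hat = min(grid, key=lambda t: sum(int(x >= t) != y for x, y in sample))
        total += (1 - 2 * noise) * abs(t_hat - 0.5)
    return total / trials

rng = random.Random(3)
realizable = mean_excess(100, 0.0, 150, rng)  # noise 0: realizable case
agnostic = mean_excess(100, 0.1, 150, rng)    # noise 0.1: agnostic case
```

A single sample size cannot exhibit asymptotic rates, but the ordering of the two averages matches the theory's prediction that realizability buys faster convergence.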
6. Implications and Applications
- Practical interpretation: For any class $\mathcal{H}$ in applied classification, the agnostic universal learning rate achieved by ERM is dictated purely by whether $\mathcal{H}$ is finite and by its VC dimension. Thus, practitioners using ERM should expect, for general distributions:
  - Exponential error decay if the class is finite (rare in practice).
  - Strictly super-root convergence, $o(1/\sqrt{n})$ (but neither exponential nor $1/n$), if $\mathcal{H}$ is infinite with finite VC dimension.
  - No universal guarantees if $\mathcal{H}$ has infinite VC dimension.
- Model selection: These trichotomy results justify the central importance of controlling class complexity—not just for worst-case PAC theory, but for universal, every-distribution (distribution-wise) learning curve guarantees.
- Theory: These results provide definitive negative evidence against the possibility of $1/n$ universal excess risk rates for ERM in agnostic learning, regardless of which infinite class $\mathcal{H}$ is chosen (even one of finite VC dimension).
- Algorithm design: For improper learners or data-dependent methods, faster rates may sometimes be achievable in special noise/margin or target-adaptive regimes, but ERM is fundamentally limited according to this trichotomy.
Summary Table: Agnostic Universal Rates for ERM
| Rate Regime | Class Characterization | Example |
|---|---|---|
| Exponential, $e^{-n}$ | $\|\mathcal{H}\| < \infty$ | Finite threshold/interval classes |
| Super-root, $o(1/\sqrt{n})$ | $\|\mathcal{H}\| = \infty$, $\mathrm{vc}(\mathcal{H}) < \infty$ | Halfspaces, finite-degree polynomials |
| Arbitrarily slow | $\mathrm{vc}(\mathcal{H}) = \infty$ | Unrestricted indicator classes |
References to Key Results
- The main trichotomy theorem: Section 3, "Agnostic universal rates for ERM, target-independent case."
- Target-dependent and Bayes-dependent trichotomies: Section 4, Theorem 2 (target-dependent), Theorem 3 (Bayes-dependent).
- Proofs, combinatorial definitions, and concrete class examples: throughout Sections 3–5; for formal statements and detailed definitions of (VC-)eluder sequences centered at a target $h^\star$ or the Bayes classifier, see the supplemental definitions.
Conclusion:
The work establishes a precise trichotomy for agnostic universal rates of ERM: exactly exponential, strictly super-root (but not $1/n$), or arbitrarily slow, fully characterized by standard class complexity (finiteness of $\mathcal{H}$ and its VC dimension) together with new structural properties of the neighborhood of optimal classifiers. This definitively resolves the structure of agnostic universal learning for ERM and provides clear guidelines for statistical learning, model complexity control, and the design of robust empirical risk minimization systems.