
Two models of double descent for weak features (1903.07571v2)

Published 18 Mar 2019 in cs.LG and stat.ML

Abstract: The "double descent" risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models. This article provides a precise mathematical analysis for the shape of this curve in two simple data models with the least squares/least norm predictor. Specifically, it is shown that the risk peaks when the number of features $p$ is close to the sample size $n$, but also that the risk decreases towards its minimum as $p$ increases beyond $n$. This behavior is contrasted with that of "prescient" models that select features in an a priori optimal order.

Authors (3)
  1. Mikhail Belkin (76 papers)
  2. Daniel Hsu (107 papers)
  3. Ji Xu (80 papers)
Citations (358)

Summary

Analysis of Double Descent in Statistical Learning

The paper "Two Models of Double Descent for Weak Features" by Mikhail Belkin, Daniel Hsu, and Ji Xu offers a rigorous mathematical exploration of the "double descent" risk curve in machine learning. This risk curve extends the classical bias-variance trade-off, incorporating behaviors manifesting in models that interpolate training data. This revisit is pivotal for models with parameter counts exceeding the sample size, specifically elucidating risk behaviors in such parameter regimes. The authors delve into two principal data models — a Gaussian model and a Fourier series model — using the least squares/least norm predictor to investigate double descent's implications.

Key Contributions

  1. Mathematical Analysis of Double Descent: The authors substantiate the double descent risk curve through precise mathematical analysis in two simple settings. They show that the test risk peaks when the number of features $p$ is close to the sample size $n$ and then decreases as $p$ grows beyond $n$, confirming the double descent hypothesis put forth by Belkin et al. (2019). A simulation sketch illustrating this curve appears after this list.
  2. Gaussian and Fourier Models:
    • Gaussian Model: The paper introduces this model, inspired by the early work of Breiman and Freedman (1983), for the case $p \leq n$ and extends the framework to the over-parameterized case ($p \geq n$), providing non-asymptotic risk expressions. The findings show that with a sufficiently high signal-to-noise ratio, the risk minimum occurs once $p$ exceeds $n$, in line with outcomes observed in machine learning practice.
    • Fourier Series Model: In this noise-free model, the features are randomly selected Fourier basis functions on the circle, representing a setting with infinitely many weak features. The authors show that the risk decreases as the number of features grows beyond the number of samples, demonstrating that the second descent appears even without label noise.
  3. Analysis of Feature Selection: The paper contrasts uninformed (random) feature selection with 'prescient' (informed) selection that adds features in an a priori optimal order. The uninformed setting exhibits double descent clearly, whereas the prescient setting recovers the classical picture in which the optimal number of features is below $n$, striking the usual bias-variance compromise.
  4. Concentration and Stability: Non-asymptotic concentration results describe how the risk fluctuates across random draws of the training sample. The resulting confidence bounds underpin the paper's statistical claims in high-dimensional settings.
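The qualitative behavior described in items 1-3 can be reproduced with a short simulation. The sketch below is an illustrative assumption rather than the paper's exact experiment: it draws a Gaussian design with many weak features (coefficients decaying like $1/j$), selects features in a random (uninformed) order, fits the minimum-norm least squares predictor for a range of feature counts $p$, and reports the test risk. All constants ($D$, $n$, $\sigma$, the random seed) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (not taken from the paper).
D, n, n_test, sigma = 200, 40, 2000, 0.5

# Many weak features: true coefficients decay like 1/j, normalized to unit norm.
beta = 1.0 / np.arange(1, D + 1)
beta /= np.linalg.norm(beta)

X_train = rng.standard_normal((n, D))
y_train = X_train @ beta + sigma * rng.standard_normal(n)
X_test = rng.standard_normal((n_test, D))
y_test = X_test @ beta + sigma * rng.standard_normal(n_test)

order = rng.permutation(D)  # uninformed (random) feature ordering

def test_risk(p: int) -> float:
    """Test risk of the min-norm least squares fit on p randomly ordered features."""
    cols = order[:p]
    beta_hat, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
    preds = X_test[:, cols] @ beta_hat
    return float(np.mean((y_test - preds) ** 2))

for p in (5, 20, 35, 39, 40, 41, 45, 60, 100, 200):
    print(f"p = {p:3d}   test risk = {test_risk(p):.3g}")
# The printed risks typically spike near p = n and then decrease again as p
# grows past n, tracing out the double descent shape described above.
```

Sorting `order` by the magnitude of the true coefficients instead would mimic the prescient selection of item 3.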

Implications

The methodological focus of the paper points toward a broader understanding of model architectures in which the feature count surpasses the number of data samples, a regime common in machine learning today. The theoretical results suggest that uninformed over-parameterization, which is natural in applications with numerous weak features (such as neural networks), can succeed even when the model interpolates the training data, offering pragmatic guidance on how aggressively models can be over-parameterized.

Future Directions

The demonstrated mathematical paradigms enrich the framework for understanding the behavior of over-parameterized models. The paper's findings invite further exploration of:

  • Experiments on high-dimensional datasets to align theoretical predictions with empirical outcomes.
  • Extending the analysis to richer learning paradigms, such as deep learning, under different noise conditions.
  • Probing the interactions between feature engineering and model capacity expansion, reinforcing the balance between theory and application in AI.

In these respects, the exposition provides fertile ground for ongoing advances in statistical learning theory, highlighting the close interplay between statistical rigor and model architecture. The study of double descent marks a critical inflection point in how features, parameters, and data quantities interact across the landscape of modern machine learning.