The generalization error of random features regression: Precise asymptotics and double descent curve (1908.05355v5)

Published 14 Aug 2019 in math.ST, stat.ML, and stat.TH

Abstract: Deep learning methods operate in regimes that defy the traditional statistical mindset. Neural network architectures often contain more parameters than training samples, and are so rich that they can interpolate the observed labels, even if the latter are replaced by pure noise. Despite their huge complexity, the same architectures achieve small generalization error on real data. This phenomenon has been rationalized in terms of a so-called 'double descent' curve. As the model complexity increases, the test error follows the usual U-shaped curve at the beginning, first decreasing and then peaking around the interpolation threshold (when the model achieves vanishing training error). However, it descends again as model complexity exceeds this threshold. The global minimum of the test error is found above the interpolation threshold, often in the extreme overparametrization regime in which the number of parameters is much larger than the number of samples. Far from being a peculiar property of deep neural networks, elements of this behavior have been demonstrated in much simpler settings, including linear regression with random covariates. In this paper we consider the problem of learning an unknown function over the $d$-dimensional sphere $\mathbb S^{d-1}$, from $n$ i.i.d. samples $(\boldsymbol x_i, y_i)\in \mathbb S^{d-1} \times \mathbb R$, $i\le n$. We perform ridge regression on $N$ random features of the form $\sigma(\boldsymbol w_a^{\mathsf T} \boldsymbol x)$, $a\le N$. This can be equivalently described as a two-layers neural network with random first-layer weights. We compute the precise asymptotics of the test error, in the limit $N,n,d\to \infty$ with $N/d$ and $n/d$ fixed. This provides the first analytically tractable model that captures all the features of the double descent phenomenon without assuming ad hoc misspecification structures.

Citations (580)

Summary

  • The paper provides explicit asymptotic formulas for test error in random features regression, capturing the double descent phenomenon.
  • It shows that optimal generalization occurs above the interpolation threshold by quantifying bias and variance across varying signal-to-noise ratios.
  • Extensive numerical simulations validate the theoretical predictions, offering insights for model complexity selection in high-dimensional settings.

Overview: Generalization Error in Random Features Regression

This paper investigates random features regression with a focus on the generalization error and the double descent phenomenon. The authors compute precise asymptotic expressions for the test error in the limit where the number of random features $N$, the sample size $n$, and the dimension $d$ all tend to infinity with the ratios $N/d$ and $n/d$ held fixed. Their findings contribute to a deeper understanding of overparametrization and its effect on generalization performance.
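To make the setup concrete, here is a minimal sketch of random features ridge regression in this proportional regime, assuming a ReLU activation and a simple linear target; the helper names (`sample_sphere`, `random_features`, `fit_ridge`) and the specific constants are illustrative choices, not taken from the paper's experiments.

```python
# A minimal sketch (not the paper's code) of random features ridge regression:
# data on the unit sphere, ReLU features sigma(w_a^T x), and a ridge fit.
import numpy as np

def sample_sphere(m, d, rng):
    """Sample m points uniformly on the unit sphere S^{d-1}."""
    z = rng.standard_normal((m, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def random_features(X, W):
    """Random feature map sigma(X W^T) with a ReLU activation."""
    return np.maximum(X @ W.T, 0.0)

def fit_ridge(Z, y, lam):
    """Ridge coefficients: argmin_a ||y - Z a||^2 / n + lam * ||a||^2."""
    n, N = Z.shape
    return np.linalg.solve(Z.T @ Z / n + lam * np.eye(N), Z.T @ y / n)

rng = np.random.default_rng(0)
d, n, N = 100, 300, 600                      # N/d and n/d stay fixed in the asymptotics
snr = 5.0                                    # signal-to-noise ratio (illustrative)

beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)                 # simple linear target f(x) = <beta, x>
W = sample_sphere(N, d, rng)                 # random (frozen) first-layer weights

X_tr, X_te = sample_sphere(n, d, rng), sample_sphere(2000, d, rng)
y_tr = X_tr @ beta + np.sqrt(1.0 / snr) * rng.standard_normal(n)

a_hat = fit_ridge(random_features(X_tr, W), y_tr, lam=1e-3)
test_err = np.mean((X_te @ beta - random_features(X_te, W) @ a_hat) ** 2)
print(f"N/d = {N/d:.1f}, n/d = {n/d:.1f}, test error = {test_err:.3f}")
```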

Double Descent Phenomenon

The work examines the "double descent" test error curve: the error first decreases as model complexity grows, peaks at the interpolation threshold, and then decreases again as overparametrization continues. The authors reproduce this entire scenario within the random features model, covering both linear and nonlinear target functions, and derive rigorous asymptotics without relying on ad hoc misspecification structures.
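The shape of this curve can be observed in a small simulation that sweeps the number of features $N$ past the interpolation threshold $N = n$ at a very small ridge penalty. This is only a sketch under assumed settings (ReLU features, a linear target, illustrative constants), not a reproduction of the paper's figures; with such settings the measured test error typically spikes near $N = n$ and decreases again beyond it.

```python
# Sketch of a double descent sweep: test error of random features ridge
# regression as N crosses the interpolation threshold N = n (tiny ridge penalty).
import numpy as np

rng = np.random.default_rng(1)
d, n, snr, lam = 100, 200, 5.0, 1e-6          # illustrative constants

def sphere(m):
    z = rng.standard_normal((m, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)                  # linear target
X_tr, X_te = sphere(n), sphere(2000)
y_tr = X_tr @ beta + np.sqrt(1.0 / snr) * rng.standard_normal(n)

for ratio in (0.2, 0.5, 0.9, 1.0, 1.1, 1.5, 3.0, 10.0):
    N = int(ratio * n)                        # sweep the overparametrization ratio N/n
    W = sphere(N)                             # fresh random first-layer weights
    Z_tr, Z_te = np.maximum(X_tr @ W.T, 0.0), np.maximum(X_te @ W.T, 0.0)
    a = np.linalg.solve(Z_tr.T @ Z_tr / n + lam * np.eye(N), Z_tr.T @ y_tr / n)
    err = np.mean((X_te @ beta - Z_te @ a) ** 2)
    print(f"N/n = {ratio:5.1f}   test error = {err:.3f}")
```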

Key Insights and Results

  1. Analytically Tractable Model: This paper is notable for offering a model that encapsulates all features of the double descent phenomenon. It provides explicit formulae for the asymptotic test error and scrutinizes the behavior across various signal-to-noise ratios (SNRs).
  2. Generalization Error: The test error's global minimum is shown to occur above the interpolation threshold, often in highly overparametrized regimes. The authors quantify the bias and variance components of the error, demonstrating the crucial role of these elements in the double descent.
  3. Overparametrization and Regularization: The paper shows that extreme overparametrization can yield optimal generalization, and identifies regimes, depending on the SNR, in which additional ridge regularization is counterproductive (a toy sweep in this spirit is sketched after this list).
  4. Numerical Validation: Extensive numerical simulations are provided, aligning with theoretical predictions and offering empirical evidence for the proposed models and asymptotic expressions.
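As a companion to points 3 and 4, the following sketch sweeps the ridge penalty $\lambda$ at a fixed overparametrized size $N > n$ for a high-SNR and a low-SNR problem. The settings (ReLU features, a linear target, the specific grid of $\lambda$ values) are assumptions for illustration rather than the paper's experimental protocol; the qualitative expectation is that the best $\lambda$ is small at high SNR and larger at low SNR.

```python
# Sketch: how the best ridge penalty shifts with the SNR in the
# overparametrized regime (N > n). Constants and grids are illustrative.
import numpy as np

rng = np.random.default_rng(2)
d, n, N = 100, 200, 1000                      # overparametrized: N/n = 5

def sphere(m):
    z = rng.standard_normal((m, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

beta = rng.standard_normal(d)
beta /= np.linalg.norm(beta)                  # linear target
W, X_tr, X_te = sphere(N), sphere(n), sphere(2000)
Z_tr, Z_te = np.maximum(X_tr @ W.T, 0.0), np.maximum(X_te @ W.T, 0.0)

for snr in (10.0, 0.5):                       # high vs. low signal-to-noise ratio
    y_tr = X_tr @ beta + np.sqrt(1.0 / snr) * rng.standard_normal(n)
    errs = {}
    for lam in (1e-6, 1e-3, 1e-1, 1.0):
        a = np.linalg.solve(Z_tr.T @ Z_tr / n + lam * np.eye(N), Z_tr.T @ y_tr / n)
        errs[lam] = np.mean((X_te @ beta - Z_te @ a) ** 2)
    best = min(errs, key=errs.get)
    print(f"SNR = {snr:4.1f}  best lambda = {best:g}")
    for lam, e in errs.items():
        print(f"    lambda = {lam:g}: test error = {e:.3f}")
```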

Implications and Future Directions

The implications of this work are twofold: practical and theoretical. Practically, it informs strategies for selecting model complexity in machine learning systems, especially in high-dimensional settings. Theoretically, it deepens our understanding of how overparametrization affects generalization.

Future research directions may include:

  • Exploring further the connection between random feature models, neural networks, and kernel methods.
  • Extending the analysis to different activation functions and model architectures.

Conclusion

This research provides a comprehensive analysis of the generalization error of random features regression, offering a robust framework for understanding the balance between bias and variance across different parametrization regimes. Its findings are significant both for the advancement of theoretical machine learning and for the practical problem of model selection.
