Layered Models can "Automatically" Regularize and Discover Low-Dimensional Structures via Feature Learning
Abstract: Layered models like neural networks appear to extract key features from data through empirical risk minimization, yet the theoretical understanding of this process remains unclear. Motivated by this observation, we study a two-layer nonparametric regression model in which the input undergoes a linear transformation followed by a nonlinear mapping to predict the output, mirroring the structure of two-layer neural networks. In our model, both layers are optimized jointly through empirical risk minimization, with the nonlinear layer modeled by a reproducing kernel Hilbert space induced by a rotation- and translation-invariant kernel and regularized by a ridge penalty. Our main result shows that the two-layer model can "automatically" induce regularization and facilitate feature learning. Specifically, the two-layer model promotes dimensionality reduction in the linear layer and identifies a parsimonious subspace of relevant features, even without any norm penalty on the linear layer. Notably, this regularization effect arises directly from the model's layered structure, independent of optimization dynamics. More precisely, assuming the covariates have nonzero explanatory power for the response only through a low-dimensional subspace (the central mean subspace), the linear layer consistently estimates both the subspace and its dimension. This demonstrates that layered models can inherently discover low-complexity solutions relevant for prediction, without relying on conventional regularization methods. Real-world data experiments further demonstrate the persistence of this phenomenon in practice.
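The setup described in the abstract admits a compact illustration. For a fixed linear layer A, the inner kernel ridge regression problem has a closed-form solution (a standard representer-theorem calculation), so the joint empirical risk minimization reduces to minimizing a profiled objective over A alone. The sketch below is a minimal illustration of this idea, not the paper's code or experimental setup: the Gaussian kernel, dimensions, learning rate, and synthetic single-index data are all illustrative assumptions.

```python
# Minimal sketch of the two-layer model y ~ f(A x), with f in a
# Gaussian-kernel RKHS penalized by a ridge term and A left unpenalized.
# For fixed A, kernel ridge regression is solved in closed form, and the
# profiled joint-ERM objective is  L(A) = lam * y^T (K_A + n*lam*I)^{-1} y,
# where (K_A)_{ij} = k(A x_i, A x_j). We run plain gradient descent on A.
import jax
import jax.numpy as jnp

def gaussian_kernel(Z):
    # Rotation- and translation-invariant kernel k(z, z') = exp(-||z - z'||^2 / 2).
    sq = jnp.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * sq)

def profiled_loss(A, X, y, lam):
    # Value of the inner kernel ridge problem at its closed-form minimizer.
    n = X.shape[0]
    K = gaussian_kernel(X @ A.T)  # kernel on the learned features A x
    return lam * y @ jnp.linalg.solve(K + n * lam * jnp.eye(n), y)

# Synthetic single-index data: y depends on x only through a 1-D subspace.
key = jax.random.PRNGKey(0)
kx, ka, ke = jax.random.split(key, 3)
n, d, p = 200, 10, 5                     # samples, input dim, rows of A
X = jax.random.normal(kx, (n, d))
beta = jnp.zeros(d).at[0].set(1.0)       # central mean subspace = span(e_1)
y = jnp.sin(X @ beta) + 0.1 * jax.random.normal(ke, (n,))

A = 0.1 * jax.random.normal(ka, (p, d))  # linear layer, no norm penalty
lam = 1e-3
grad = jax.jit(jax.grad(profiled_loss))  # differentiate through the solve
for _ in range(500):
    A = A - 0.5 * grad(A, X, y, lam)

# The paper's claim, in miniature: the spectrum of the learned A should
# concentrate on one direction, matching the 1-D central mean subspace.
print(jnp.linalg.svd(A, compute_uv=False))
```

In this toy setting, inspecting the singular values of the learned A is one way to probe the abstract's claim that the layered structure alone, without any explicit penalty on A, drives the linear layer toward a low-dimensional subspace of relevant features.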