Microscopic and collective signatures of feature learning in neural networks (2508.20989v1)

Published 28 Aug 2025 in cond-mat.dis-nn and cond-mat.stat-mech

Abstract: Feature extraction, the ability to identify relevant properties of data, is a key factor underlying the success of deep learning. Yet, it has proved difficult to elucidate its nature within existing predictive theories, to the extent that there is no consensus on the very definition of feature learning. A promising hint in this direction comes from previous phenomenological observations of quasi-universal aspects of the training dynamics of neural networks, displayed by simple properties of feature geometry. We address this problem within a statistical-mechanics framework for Bayesian learning in one-hidden-layer neural networks with standard parameterization. Analytical computations in the proportional limit (where both the network width and the size of the training set are large) can quantify fingerprints of feature learning, both collective ones (related to manifold geometry) and microscopic ones (related to the weights). In particular, (i) the distance between different class manifolds in feature space is a nonmonotonic function of the temperature, which we interpret as the equilibrium counterpart of a phenomenon observed under gradient descent (GD) dynamics, and (ii) the microscopic learnable parameters of the network undergo a finite, data-dependent displacement with respect to the infinite-width limit and develop correlations. These results indicate that nontrivial feature learning is at play in a regime where the posterior predictive distribution is that of Gaussian process regression with a trivially rescaled prior.
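As an illustrative aside (not code from the paper), the collective observable in point (i), a distance between class manifolds in the hidden-layer feature space of a one-hidden-layer network, can be sketched numerically. The network width, the tanh activation, the synthetic Gaussian class data, and the centroid-based distance below are all assumptions chosen for the example.

```python
# Minimal sketch (assumed, not the paper's code): inter-class manifold distance
# in the feature space of a one-hidden-layer network with standard
# parameterization, i.e. weights of order 1/sqrt(fan-in).
import numpy as np

rng = np.random.default_rng(0)

d, width, n_per_class = 10, 512, 200   # input dim, hidden width, samples per class

# Hidden weights at a single random draw (standard parameterization).
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(width, d))

def features(x):
    # Hidden-layer representation phi(x) = tanh(x W^T); tanh is an assumption.
    return np.tanh(x @ W.T)

# Two synthetic class manifolds: Gaussian clouds with opposite means,
# standing in for real class data.
x0 = rng.normal(-1.0, 0.5, size=(n_per_class, d))
x1 = rng.normal(+1.0, 0.5, size=(n_per_class, d))

phi0, phi1 = features(x0), features(x1)

# One simple notion of manifold distance: Euclidean gap between class
# centroids, normalized by the average within-class spread.
gap = np.linalg.norm(phi0.mean(axis=0) - phi1.mean(axis=0))
spread = 0.5 * (phi0.std(axis=0).mean() + phi1.std(axis=0).mean())
print(f"centroid gap: {gap:.3f}, relative to spread: {gap / spread:.2f}")
```

In the paper's equilibrium setting, this kind of geometric observable is studied as a function of temperature; the snippet above only evaluates the geometry at a single random draw of the weights.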
