
Neural Feature Dynamics in Deep Networks

Updated 25 December 2025
  • Neural Feature Dynamics is a mathematical framework that characterizes the evolution of learned features and gradients in deep networks as their width and depth increase.
  • It employs maximal-update parameterization and depth scaling to maintain O(1) feature evolution, enabling robust hyperparameter transfer and consistent scaling law predictions.
  • NFD highlights failure modes such as internal collapse in multi-layer residual blocks and recommends depth-aware learning rate corrections to stabilize training.

Neural Feature Dynamics (NFD) refers to the formal mathematical framework that characterizes the evolution of learned features in deep neural networks, particularly within the regime where both width and depth of the architecture become large simultaneously. NFD provides an analytic lens on end-to-end feature learning, scaling behaviors, and the validity of critical assumptions such as gradient-independence, thereby offering principled insight into when and why deep networks obey or deviate from classic scaling laws (Yao et al., 24 Dec 2025).

1. Foundations: Parameterization and Feature Learning Regimes

The development of NFD is tightly linked to the maximal-update parameterization (MuP), which prescribes weight and learning-rate scalings so that feature evolution remains nontrivial in the infinite-width limit. For an $L$-layer ResNet of width $n$, the original MuP framework ensures that the feature activations and their updates remain $O(1)$ as $n \to \infty$, enabling reliable hyperparameter transfer across widths (Noci et al., 27 Feb 2024).

Depth-MuP generalizes this to depth by re-scaling the residual branch by a factor $T/L$, where $T$ is a time-horizon parameter, leading to the hidden-state update

$$h_e = h_{e-1} + \frac{T}{L} \frac{1}{\sqrt{n}} W_e \phi(h_{e-1}).$$

This scaling is essential for maintaining meaningful feature evolution as $L \to \infty$ and, in single-layer residual blocks, supports empirical and theoretical consistency across varying network depths (Yao et al., 24 Dec 2025).
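As a concrete illustration of the update above, the following minimal NumPy sketch (not code from the referenced papers; the function name `depth_mup_forward` and the tanh nonlinearity are illustrative choices) applies the depth-MuP residual recursion and checks that the root-mean-square activation stays roughly $O(1)$ as depth grows.

```python
import numpy as np

def depth_mup_forward(x, weights, T=1.0, phi=np.tanh):
    """Forward pass of a depth-muP residual stack (illustrative sketch).

    x       : input feature vector of width n
    weights : list of L weight matrices, each of shape (n, n)
    T       : time-horizon parameter controlling total feature drift
    phi     : elementwise nonlinearity
    """
    n = x.shape[0]
    L = len(weights)
    h = x.copy()
    for W in weights:
        # Residual branch scaled by T/L (depth-muP) and 1/sqrt(n) (width-muP),
        # so each block contributes an O(1/L) update and features stay O(1).
        h = h + (T / L) * (W @ phi(h)) / np.sqrt(n)
    return h

# Example: activations remain O(1) even for a deep stack under the T/L scaling.
rng = np.random.default_rng(0)
n, L = 256, 64
weights = [rng.standard_normal((n, n)) for _ in range(L)]
x = rng.standard_normal(n)
h = depth_mup_forward(x, weights)
print(float(np.sqrt(np.mean(h**2))))  # root-mean-square activation, roughly O(1)
```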

2. Derivation of Neural Feature Dynamics in the Joint Limit

Neural Feature Dynamics rigorously describes the evolution of both forward activations and backward gradients in the joint limit $n, L \to \infty$. The discrete-layer, discrete-step stochastic training dynamics converge to a coupled forward–backward McKean–Vlasov stochastic differential system:

$$\begin{aligned}
dh_t^{(k)} &= -n_c \sum_{i=0}^{k-1} \mathcal{L}'(f^{(i)}, y^{(i)})\, \mathbb{E}\big[\phi(h_t^{(k)})\, \phi(h_t^{(i)})\big]\, g_t^{(i)}\,dt + dW_t^{(k)}, \\
dg_t^{(k)} &= -n_c \sum_{i=0}^{k-1} \mathcal{L}'(f^{(i)}, y^{(i)})\, \mathbb{E}\big[g_t^{(k)}\, g_t^{(i)}\big]\, \phi(h_t^{(i)})\,dt + d\widetilde{W}_t^{(k)}.
\end{aligned}$$

Here, $h_t^{(k)}$ and $g_t^{(k)}$ denote the continuum-depth features and their backpropagated gradients, while the stochastic terms are governed by Brownian motions whose covariances match the neural-kernel and gradient statistics.

The limiting system admits existence and uniqueness of solutions, and finite-width/depth ResNets converge to it at rate $O(1/n + 1/L)$. The key structural result is that the forward–backward couplings, which violate the gradient-independence assumption (GIA) in standard regimes, vanish as $L \to \infty$ under depth-MuP scaling, thereby restoring the validity of the GIA and rendering the analysis tractable (Yao et al., 24 Dec 2025).

3. Implications for Scaling Laws and Representational Capacity

Within the NFD framework, the representational capacity in the deep-and-wide limit is characterized by the reproducing kernel Hilbert space (RKHS) induced by the Neural Network Gaussian Process (NNGP) kernel associated with the limiting SDE. In this regime, training/test loss convergence is governed by $O(n^{-1} + L^{-1})$ corrections, and further scaling beyond this regime exhibits strictly diminishing returns—an analytic explanation for empirical plateaus in scaling-law gains (Yao et al., 24 Dec 2025).
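A quick way to see the diminishing returns is to evaluate the leading correction term $1/n + 1/L$ for a few width/depth pairs; the snippet below is a back-of-the-envelope illustration under that assumption, not an empirical result from the paper.

```python
# Leading finite-size correction 1/n + 1/L from the O(n^{-1} + L^{-1}) rate.
# Once both width and depth are large, doubling either barely moves the bound.
for n, L in [(256, 16), (512, 32), (1024, 64), (2048, 128), (4096, 256)]:
    print(f"n={n:5d}  L={L:4d}  correction ~ {1/n + 1/L:.4f}")
```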

Moreover, increasing the time-horizon parameter $T$ can enlarge the RKHS and thus the realizable class of functions, but this comes with increased risk of training instability, reflecting optimization–capacity tradeoffs in ultra-deep regimes.

4. Failure Modes: Collapse in Multi-layer Residual Blocks

Depth-MuP and NFD are only structurally valid for single-layer residual blocks. For two (or more) internal layers per block (as in standard Transformer or pre-activation ResNet architectures), a vanishing mechanism induced by the $1/L$ residual scaling suppresses the feature updates of the first internal layer:

$$x_e^{\text{new}} - x_e = O\!\left(\frac{T n_c}{L}\right).$$

This term vanishes as $L \to \infty$, leading to "internal collapse": the first internal layer ceases to contribute to feature learning, while the outer residual transition persists. Empirically, this collapse explains the failure of depth-MuP to transfer hyperparameters in models with multi-layer block structures, and marks a fundamental breakdown of NFD-derived scaling predictions in such settings (Yao et al., 24 Dec 2025).
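The toy sketch below makes the vanishing mechanism concrete. It is a hypothetical setup, not the paper's experiment (the architecture, loss, and constants are arbitrary, so the exact exponent differs from the $O(T n_c / L)$ rate above): it takes one gradient step on only the first internal layer of a two-layer block with a depth-independent learning rate and measures how the induced feature change shrinks as $L$ grows.

```python
import torch

def first_layer_update_effect(L, n=256, T=1.0, lr=0.1, seed=0):
    """One SGD step on only the first internal layer of a two-layer residual
    block (toy setup), returning the size of the induced feature change."""
    torch.manual_seed(seed)
    W1 = torch.randn(n, n, requires_grad=True)
    W2 = torch.randn(n, n)
    x = torch.randn(n)

    def block(W1_):
        u = (W1_ @ torch.tanh(x)) / n ** 0.5                 # first internal layer
        return x + (T / L) * (W2 @ torch.tanh(u)) / n ** 0.5  # second layer + residual

    loss = block(W1).sum()             # stand-in scalar loss
    loss.backward()
    with torch.no_grad():
        W1_new = W1 - lr * W1.grad     # depth-independent learning rate
        delta = block(W1_new) - block(W1)
    return delta.norm().item()

# The feature change attributable to the first internal layer shrinks as L grows,
# illustrating the "internal collapse" mechanism (constants here are arbitrary).
for L in (8, 64, 512):
    print(L, first_layer_update_effect(L))
```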

5. Remediation: Depth-aware Learning Rate Correction

The analysis of the NFD dynamics reveals that internal collapse can be counteracted by a depth-aware learning-rate schedule. Multiplying the learning rate of the first internal layer by $L$, i.e. setting

$$\eta_{e,1} = n_c\, n\, L, \qquad \eta_{e,2} = n_c\, n,$$

restores the suppressed feature update to $O(1)$ across all depths, enabling non-vanishing learning and re-establishing robust hyperparameter transfer as depth increases. Empirically, this correction recovers the benefits of depth scaling and the NFD-predicted trends in deep ResNets on CIFAR-10 and similar benchmarks (Yao et al., 24 Dec 2025).
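In an implementation, the correction amounts to per-layer learning rates. The sketch below is a hypothetical helper, assuming residual blocks that expose their two internal linear layers as `.lin1` and `.lin2`; it builds optimizer parameter groups with $\eta_{e,1} = n_c n L$ and $\eta_{e,2} = n_c n$.

```python
import torch

def depth_aware_param_groups(blocks, n, L, n_c):
    """Build optimizer parameter groups for the depth-aware correction:
    eta_{e,1} = n_c * n * L for the first internal layer of each block,
    eta_{e,2} = n_c * n for the second. `blocks` is assumed (hypothetically)
    to be a list of modules exposing `.lin1` and `.lin2` submodules."""
    groups = []
    for blk in blocks:
        groups.append({"params": blk.lin1.parameters(), "lr": n_c * n * L})  # boosted by L
        groups.append({"params": blk.lin2.parameters(), "lr": n_c * n})      # plain depth-muP rate
    return groups

# Example usage (hypothetical model with 96 two-layer residual blocks of width 1024):
# optimizer = torch.optim.SGD(
#     depth_aware_param_groups(model.blocks, n=1024, L=96, n_c=1e-6),
#     lr=1e-6 * 1024)  # default lr, overridden per parameter group
```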

6. Practical Guidelines and Theoretical Significance

NFD provides rigorous, actionable principles for large-scale network training:

  • Use pre-activation ResNets and depth-MuP ($T/L$ residual scaling) for scaling-consistent feature learning.
  • Tune the core learning-rate parameter $n_c$ on small-depth models and deploy it at large depths without re-tuning for single-layer blocks (a minimal workflow sketch follows this list).
  • In multi-layer residual blocks, employ depth-aware learning-rate scaling for the first internal layer to avoid collapse.
  • For additional capacity beyond width/depth scaling, modestly raise the time-horizon $T$, with care for stability.
  • Recognize that further scaling past the joint infinite limit leads to diminishing returns predictable from the NFD SDE's RKHS structure.
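The sketch below outlines the transfer workflow referenced in the second bullet, under the assumption that `train_and_eval(n, L, n_c)` is the user's own training routine returning a validation loss; it is illustrative, not code from the paper.

```python
def tune_then_transfer(train_and_eval, n_c_grid, proxy=(256, 8), target=(4096, 128)):
    """Hedged sketch of muP-style transfer: sweep the base rate n_c on a small
    proxy (width, depth), then reuse the best n_c at the target scale.
    `train_and_eval(n, L, n_c)` is a placeholder for the user's training loop."""
    best_nc = min(n_c_grid, key=lambda nc: train_and_eval(*proxy, n_c=nc))
    # Under depth-muP (with the first-internal-layer correction for multi-layer
    # blocks), the optimal n_c is approximately width- and depth-independent,
    # so the proxy optimum is deployed directly at the target scale.
    return best_nc, train_and_eval(*target, n_c=best_nc)
```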

The introduction of NFD thus establishes a mathematically tractable regime for analyzing feature learning and transfer in overparameterized deep networks, clarifying both the capacity boundaries and structural pitfalls of modern scaling practice (Yao et al., 24 Dec 2025, Noci et al., 27 Feb 2024).

7. Broader Context and Ongoing Research Directions

NFD stands in contrast to classical Neural Tangent Kernel (NTK) scaling, in which the kernel is frozen and feature learning is absent in the infinite-width limit. Under NTK scaling, curvature properties and optimal hyperparameters diverge with model size, whereas depth-MuP/NFD yields width- and depth-consistent dynamics, a distinction confirmed across architectures including ResNets, Vision Transformers, and LLMs (Noci et al., 27 Feb 2024).

Continued investigation into extensions of NFD for architectures with deeper residual blocks, attention mechanisms, and nontrivial skip connections remains an active research area. Open questions include generalizing NFD beyond pre-activation forms and rigorously quantifying instability regimes at extreme time horizons or learning rates.

| Regime/Architecture | NFD Validity | Hyperparameter Transfer | Failure Mode |
| --- | --- | --- | --- |
| Single-layer residual block | Valid (joint limit) | Yes | None |
| Multi-layer residual block | Collapse in first internal layer | No (unless corrected) | Vanishing feature updates |
| NTK/standard parameterization | Not applicable | No | Frozen kernel |

NFD therefore constitutes a foundational analytic tool for understanding the emergence, limitations, and optimality conditions of feature learning in the modern deep-learning scaling paradigm (Yao et al., 24 Dec 2025, Noci et al., 27 Feb 2024).

References (2)

  • Yao et al., 24 Dec 2025
  • Noci et al., 27 Feb 2024
