
Feature Learning Beyond the Edge of Stability (2502.13110v2)

Published 18 Feb 2025 in cs.LG

Abstract: We propose a homogeneous multilayer perceptron parameterization with polynomial hidden layer width pattern and analyze its training dynamics under stochastic gradient descent with depthwise gradient scaling in a general supervised learning scenario. We obtain formulas for the first three Taylor coefficients of the minibatch loss during training that illuminate the connection between sharpness and feature learning, providing in particular a soft rank variant that quantifies the quality of learned hidden layer features. Based on our theory, we design a gradient scaling scheme that in tandem with a quadratic width pattern enables training beyond the edge of stability without loss explosions or numerical errors, resulting in improved feature learning and implicit sharpness regularization as demonstrated empirically.
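
As standard background, gradient descent with step size eta operates at the "edge of stability" when the loss sharpness (the largest Hessian eigenvalue) hovers near 2/eta, beyond which training ordinarily destabilizes. The sketch below illustrates the two ingredients named in the abstract under assumed specifics: a bias-free (hence homogeneous) ReLU MLP whose hidden widths follow a quadratic pattern, and depthwise gradient scaling implemented as per-layer learning rates. The base * (l + 1)**2 width rule and the 1/(l + 1) scaling factor are illustrative placeholders, not the scheme the paper derives.

```python
import torch
import torch.nn as nn

def quadratic_widths(base: int, depth: int) -> list:
    """Hidden widths growing quadratically with depth (assumed pattern)."""
    return [base * (l + 1) ** 2 for l in range(depth)]

def build_mlp(in_dim: int, out_dim: int, base: int = 16, depth: int = 3) -> nn.Sequential:
    """Bias-free ReLU MLP, so every layer map is positively homogeneous."""
    widths = [in_dim] + quadratic_widths(base, depth)
    layers = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(w_in, w_out, bias=False), nn.ReLU()]
    layers.append(nn.Linear(widths[-1], out_dim, bias=False))
    return nn.Sequential(*layers)

model = build_mlp(in_dim=10, out_dim=1)

# Depthwise gradient scaling via per-layer SGD parameter groups.
# The 1 / (l + 1) factor is a placeholder; the paper derives its own scaling.
param_groups = []
depth_index = 0
for module in model.modules():
    if isinstance(module, nn.Linear):
        param_groups.append({"params": module.parameters(),
                             "lr": 0.1 / (depth_index + 1)})
        depth_index += 1
optimizer = torch.optim.SGD(param_groups)

# One SGD step on a random minibatch (illustrative data).
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch the depthwise scaling simply reuses the optimizer's parameter-group mechanism, which keeps the gradient computation unchanged and rescales only the update applied to each layer.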
