Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces
Abstract: Deep neural networks (DNNs) typically contain millions of parameters, yet much of this capacity is redundant, which suggests that DNNs could be trained in low-dimensional subspaces. In this paper, we propose Dynamic Linear Dimensionality Reduction (DLDR), which exploits the low-dimensional structure of the training trajectory. The reduction is effective, as supported by comprehensive experiments: optimization in a 40-dimensional space achieves performance comparable to regular training over thousands or even millions of parameters. Because only a few variables remain to be optimized, we further develop a quasi-Newton-based algorithm and demonstrate robustness against label noise; these two follow-up experiments illustrate the advantages of identifying such low-dimensional subspaces.
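To make the idea concrete, below is a minimal, hedged sketch of subspace training in the spirit of DLDR. It assumes the low-dimensional basis is extracted by PCA (via SVD) over parameter snapshots sampled along an ordinary SGD trajectory, and that training then proceeds over the subspace coefficients only; the exact sampling, projection, and optimizer choices in the paper may differ, and the model, data, and hyperparameters here are purely illustrative.

```python
# Illustrative sketch of trajectory-based subspace training (not the paper's exact algorithm).
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

torch.manual_seed(0)

# Toy model and synthetic data (placeholders for a real DNN and dataset).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
loss_fn = nn.CrossEntropyLoss()

# Phase 1: regular SGD, sampling parameter snapshots along the trajectory.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
snapshots = []
for step in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
    if step % 5 == 0:  # sample the trajectory every few steps
        snapshots.append(parameters_to_vector(model.parameters()).detach().clone())

# Phase 2: PCA over the snapshots yields a d-dimensional orthonormal basis.
W = torch.stack(snapshots)                      # (num_snapshots, num_params)
mean = W.mean(dim=0)
_, _, Vt = torch.linalg.svd(W - mean, full_matrices=False)
d = 10                                          # e.g. 40 in the paper's experiments
basis = Vt[:d]                                  # (d, num_params)

# Phase 3: optimize only the d coefficients; parameters = mean + coeffs @ basis.
coeffs = torch.zeros(d, requires_grad=True)
sub_opt = torch.optim.SGD([coeffs], lr=0.1)     # a quasi-Newton step could replace plain SGD here
for step in range(200):
    vector_to_parameters((mean + coeffs @ basis).detach(), model.parameters())
    loss = loss_fn(model(X), y)
    grads = torch.autograd.grad(loss, model.parameters())
    grad_vec = parameters_to_vector(grads)
    coeffs.grad = (basis @ grad_vec).detach()   # chain rule: project the full gradient onto the basis
    sub_opt.step()
    sub_opt.zero_grad()

print("final subspace-training loss:", loss.item())
```

The key point the sketch illustrates is that after the basis is fixed, the number of optimization variables drops from the full parameter count to d, which is what makes quasi-Newton updates affordable in this setting.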