On Implications of Scaling Laws on Feature Superposition (2407.01459v1)
Published 1 Jul 2024 in cs.LG and cs.AI
Abstract: Using results from scaling laws, this theoretical note argues that the following two statements cannot be simultaneously true: 1. The superposition hypothesis, in which sparse features are linearly represented across a layer, is a complete theory of feature representation. 2. Features are universal, meaning that two models trained on the same data and achieving equal performance will learn identical features.
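To make the first statement concrete, the sketch below illustrates the linear-representation picture behind the superposition hypothesis: more sparse features than neurons are embedded into a layer via a fixed linear map and read back linearly, with interference that stays modest while features remain sparse. This is a toy illustration, not the paper's construction; the dimensions, sparsity level, and variable names (`W`, `n_features`, `d`) are assumptions chosen for demonstration.

```python
# Toy superposition sketch (illustrative only): embed n_features > d sparse
# features into a d-dimensional layer with a linear map W, then read them back
# linearly and measure interference on the active features.
import numpy as np

rng = np.random.default_rng(0)
n_features, d = 512, 64   # more features than neurons -> superposition
sparsity = 0.02           # each feature is active with small probability

# Random near-orthogonal unit-norm feature directions (columns of W).
W = rng.normal(size=(d, n_features))
W /= np.linalg.norm(W, axis=0, keepdims=True)

# A batch of sparse, non-negative feature vectors.
x = (rng.random((1000, n_features)) < sparsity) * rng.random((1000, n_features))

h = x @ W.T      # layer activations: a linear superposition of active features
x_hat = h @ W    # linear readout of every feature from the activations

active = x > 0
err = np.abs(x_hat[active] - x[active]).mean()
# Readout error comes from interference between non-orthogonal directions;
# it shrinks as sparsity decreases or as the layer width d grows.
print(f"mean readout error on active features: {err:.3f}")
```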