Improving Dictionary Learning with Gated Sparse Autoencoders (2404.16014v2)
Abstract: Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised discovery of interpretable features in language models' (LMs) activations, by finding sparse, linear reconstructions of LM activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to encourage sparsity introduces many undesirable biases, such as shrinkage -- systematic underestimation of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable reconstruction fidelity.
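As a concrete illustration of the split between (a) gating and (b) magnitude estimation described in the abstract, here is a minimal PyTorch sketch of a Gated SAE. The module and parameter names (`W_enc`, `r_mag`, `b_gate`, `b_mag`, `W_dec`, `b_dec`), the tied encoder weights between the two paths, and the auxiliary reconstruction term are one plausible instantiation based on the full paper rather than anything stated in the abstract, so treat the details as assumptions, not the authors' training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedSAE(nn.Module):
    """Minimal sketch of a gated sparse autoencoder.

    The gate path decides which dictionary directions are active; the
    magnitude path estimates how strongly each active direction fires.
    The sparsity penalty touches only the gate path, so magnitude
    estimates are not shrunk toward zero.
    """

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_dict) * 0.01)
        self.r_mag = nn.Parameter(torch.zeros(d_dict))   # per-feature rescale for the magnitude path
        self.b_gate = nn.Parameter(torch.zeros(d_dict))
        self.b_mag = nn.Parameter(torch.zeros(d_dict))
        self.W_dec = nn.Parameter(torch.randn(d_dict, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        x_centered = x - self.b_dec
        pre = x_centered @ self.W_enc
        pi_gate = pre + self.b_gate                              # gate pre-activations
        gate = (pi_gate > 0).float()                             # binary mask: which features fire
        mag = F.relu(pre * torch.exp(self.r_mag) + self.b_mag)   # magnitude estimates
        f = gate * mag                                           # combined feature activations
        x_hat = f @ self.W_dec + self.b_dec
        return x_hat, f, pi_gate

    def loss(self, x: torch.Tensor, l1_coeff: float) -> torch.Tensor:
        x_hat, f, pi_gate = self(x)
        recon = (x - x_hat).pow(2).sum(-1).mean()
        # Sparsity penalty on the gate path only, via its ReLU'd pre-activations.
        sparsity = F.relu(pi_gate).sum(-1).mean()
        # Auxiliary term: the gate path must also reconstruct x on its own,
        # through a frozen copy of the decoder, so it receives a training signal
        # despite the non-differentiable binary gate.
        x_hat_gate = F.relu(pi_gate) @ self.W_dec.detach() + self.b_dec.detach()
        aux = (x - x_hat_gate).pow(2).sum(-1).mean()
        return recon + l1_coeff * sparsity + aux
```

The point of the separation is visible in `loss`: the L1-style penalty acts only on the gate pre-activations, while the magnitude path is trained purely by reconstruction error, which is what lets Gated SAEs avoid shrinkage of the estimated feature activations.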