Improving Dictionary Learning with Gated Sparse Autoencoders (2404.16014v2)

Published 24 Apr 2024 in cs.LG and cs.AI

Abstract: Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised discovery of interpretable features in language models' (LMs) activations, by finding sparse, linear reconstructions of LM activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to encourage sparsity introduces many undesirable biases, such as shrinkage -- systematic underestimation of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable reconstruction fidelity.


Summary

  • The paper introduces the Gated Sparse Autoencoder (Gated SAE), which decouples feature detection from magnitude estimation to avoid the biases of the L1 sparsity penalty and improve reconstruction fidelity.
  • The gated encoder uses two affine transformations, a gating path and a magnitude path with partial weight sharing, together with an auxiliary reconstruction loss that mitigates shrinkage.
  • Benchmarks on language models of up to 7B parameters show Pareto improvements over baseline SAEs on the sparsity-fidelity trade-off, with comparable interpretability.

Enhancements in Sparse Autoencoder Architectures with Gated SAEs

Introduction

Sparse autoencoders (SAEs) decompose model activations into sparse, linear combinations of feature directions, facilitating interpretability in neural networks. Traditional SAEs, while useful, are limited by the L1 sparsity penalty, which introduces biases such as shrinkage: systematic underestimation of feature activations that degrades reconstruction fidelity. The Gated Sparse Autoencoder (Gated SAE) mitigates these limitations by decoupling feature detection from magnitude estimation, yielding more faithful reconstructions at a given level of sparsity.
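
For context, the sketch below shows a baseline SAE of the kind the paper improves upon, with the L1 penalty applied directly to the feature activations used for reconstruction. Variable names, initialization, and the loss coefficient are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineSAE(nn.Module):
    """Standard sparse autoencoder: ReLU encoder, linear decoder, L1 sparsity penalty."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        f = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # feature activations
        x_hat = f @ self.W_dec + self.b_dec                      # linear reconstruction
        return x_hat, f

def baseline_loss(sae, x, l1_coeff=1e-3):
    x_hat, f = sae(x)
    recon = (x - x_hat).pow(2).sum(-1).mean()
    sparsity = f.sum(-1).mean()  # L1 on the same activations used to reconstruct -> shrinkage
    return recon + l1_coeff * sparsity
```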

Enhancements in Gated SAE Architecture

The core innovation of Gated SAEs lies in the encoder, which splits the traditional SAE encoder into two distinct roles: detecting which features are active and estimating their magnitudes. A gated mechanism uses separate affine transformations for these two tasks, and the sparsity penalty is applied only to the feature-detection (gating) path; a minimal sketch follows the list of architectural details below.

Key Architectural Details:

  • Gated Mechanism: Separate paths for feature detection (a thresholding gate on the gating pre-activations) and magnitude estimation (a conventional ReLU).
  • Weight Sharing: The two paths share encoder directions up to a per-feature rescaling, so the parameter count grows only marginally over a baseline SAE.
  • Auxiliary Loss: An additional reconstruction loss, computed from the gating pre-activations through a frozen copy of the decoder, trains the gating path without letting the sparsity penalty distort the magnitudes used for reconstruction, directly addressing the shrinkage seen in baseline SAEs.
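
A minimal PyTorch sketch of this architecture and its training loss, following the description above, is given below; the exact initialization, decoder-column normalization, and loss coefficients used in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSAE(nn.Module):
    """Sketch of a Gated SAE: a gating path decides which features fire, and a
    magnitude path (sharing encoder directions with the gate) estimates how strongly."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_gate = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.r_mag = nn.Parameter(torch.zeros(d_sae))   # per-feature rescaling (weight sharing)
        self.b_gate = nn.Parameter(torch.zeros(d_sae))
        self.b_mag = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        x_cent = x - self.b_dec
        pi_gate = x_cent @ self.W_gate + self.b_gate              # gating pre-activations
        # Magnitude path reuses the gate's encoder directions, scaled per feature.
        pi_mag = x_cent @ (self.W_gate * self.r_mag.exp()) + self.b_mag
        f = (pi_gate > 0).float() * F.relu(pi_mag)                # gate decides, magnitude scales
        x_hat = f @ self.W_dec + self.b_dec
        return x_hat, f, pi_gate

def gated_loss(sae, x, l1_coeff=1e-3):
    x_hat, f, pi_gate = sae(x)
    recon = (x - x_hat).pow(2).sum(-1).mean()
    # Sparsity penalty acts only on the gating path, not on the magnitudes used above.
    sparsity = F.relu(pi_gate).sum(-1).mean()
    # Auxiliary term: reconstruct from the gating path through a frozen copy of the
    # decoder, so the gating path still receives a reconstruction signal.
    x_hat_aux = F.relu(pi_gate) @ sae.W_dec.detach() + sae.b_dec.detach()
    aux = (x - x_hat_aux).pow(2).sum(-1).mean()
    return recon + l1_coeff * sparsity + aux
```

Because the hard threshold in the gate blocks gradients, the auxiliary term is what supplies a training signal to the gating path, while the main reconstruction term trains the magnitude path free of any sparsity pressure.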

Benchmarking Performance

Gated SAEs were evaluated against baseline SAEs across multiple language models (up to 7B parameters) and layers within those models. Improvements were measured on two primary metrics: sparsity, quantified as L0 (the average number of features that fire per token), and reconstruction fidelity, quantified as loss recovered (how much of the LM's cross-entropy loss is retained when the SAE reconstruction replaces the original activations); the snippet below sketches both metrics.
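
The helper functions below illustrate one common way to compute these two metrics, assuming you have the SAE's feature activations and cross-entropy losses measured with the original activations, with the SAE reconstruction spliced in, and with the activations zero-ablated; the paper's exact normalization may differ.

```python
import torch

def l0_sparsity(feature_acts: torch.Tensor) -> float:
    """Mean number of active (nonzero) features per token -- the L0 metric."""
    return (feature_acts != 0).float().sum(-1).mean().item()

def loss_recovered(ce_clean: float, ce_spliced: float, ce_zero_ablated: float) -> float:
    """Fraction of the LM's cross-entropy loss recovered when the SAE reconstruction
    is spliced into the forward pass, relative to zero-ablating the activations.
    A value of 1.0 means the reconstruction is as good as the original activations."""
    return 1.0 - (ce_spliced - ce_clean) / (ce_zero_ablated - ce_clean)
```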

Performance Insights:

  • Pareto Improvements: Across typical hyperparameter ranges, Gated SAEs consistently dominated baseline SAEs on the sparsity-fidelity frontier, requiring roughly half as many firing features for comparable reconstruction fidelity.
  • Shrinkage Resolved: Unlike baseline SAEs, Gated SAEs exhibited negligible shrinkage, because the sparsity penalty no longer acts on the magnitudes used for reconstruction.
  • Interpretability: A preliminary study of feature interpretability with human raters found Gated SAE features comparable to baseline SAE features, suggesting no loss of interpretability despite the added architectural complexity.

Theoretical and Practical Implications

The implementation of Gated SAEs presents both theoretical and practical advances in the field of neural network interpretability. Theoretically, it offers a refined understanding of how to manage sparsity and fidelity in reconstructions without succumbing to biases like shrinkage. Practically, it provides a more robust tool for dissecting and understanding neural network operations, thereby possibly enhancing the accuracy and utility of interpretative outputs in real-world applications.

Future Directions

Looking ahead, research on Gated SAEs could expand to larger models and more diverse neural architectures to assess scalability and effectiveness. Future studies might also compare feature interpretability across SAE architectures in more detail, to solidify understanding of how architectural choices affect practical interpretability outcomes.

Conclusion

The development of Gated SAEs marks a significant step toward overcoming some of the intrinsic limitations posed by baseline SAE architectures, primarily through innovative architectural modifications and training strategies. This advancement paves the way for more accurate, scalable, and interpretable representations in machine learning models, aligning with the broader goals of improving transparency and reliability in AI systems.