Can sparse autoencoders make sense of latent representations? (2410.11468v2)

Published 15 Oct 2024 in cs.LG

Abstract: Sparse autoencoders (SAEs) have lately been used to uncover interpretable latent features in LLMs. Here, we explore their potential for decomposing latent representations in complex and high-dimensional biological data, where the underlying variables are often unknown. Using simulated data, we find that latent representations can encode observable and directly connected upstream hidden variables in superposition. The degree to which they are learned depends on the type of variable and the model architecture, favoring shallow and wide networks. Superpositions, however, are not identifiable if the generative variables are unknown. SAEs can recover these variables and their structure with respect to the observables. Applied to single-cell multi-omics data, we show that SAEs can uncover key biological processes. We further present an automated method for linking SAE features to biological concepts to enable large-scale analysis of single-cell expression models.
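To illustrate the core technique the abstract refers to, below is a minimal sketch of a sparse autoencoder trained on latent activations. This is not the paper's implementation: the layer sizes, the ReLU encoder, the L1 sparsity penalty, and all function names are illustrative assumptions.

```python
# Minimal sparse autoencoder (SAE) sketch for decomposing latent representations.
# Hyperparameters and architecture are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_latent: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_latent, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_latent)

    def forward(self, x):
        # ReLU keeps hidden features non-negative; sparsity comes from the L1 term below.
        h = torch.relu(self.encoder(x))
        return self.decoder(h), h

def train_sae(latents: torch.Tensor, d_hidden: int = 4096,
              l1_coeff: float = 1e-3, epochs: int = 10, lr: float = 1e-3):
    sae = SparseAutoencoder(latents.shape[1], d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = sae(latents)
        # Reconstruction loss plus L1 penalty encourages a sparse, interpretable code.
        loss = ((recon - latents) ** 2).mean() + l1_coeff * h.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae

# Example usage with stand-in "latent representations":
# latents = torch.randn(10_000, 256)
# sae = train_sae(latents)
```

In this kind of setup, each hidden unit of the SAE is treated as a candidate feature; the paper's contribution concerns whether such features recover the hidden generative variables behind the observed data.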
