ParaScopes: What do Language Models Activations Encode About Future Text? (2511.00180v1)
Abstract: Interpretability studies of LLMs often investigate forward-looking representations in activations. However, as LLMs become capable of ever longer time-horizon tasks, methods for understanding activations often remain limited to testing specific concepts or tokens. We develop a framework of Residual Stream Decoders as a method of probing model activations for paragraph-scale and document-scale plans. We test several methods and find that information equivalent to 5+ tokens of future context can be decoded in small models. These results lay the groundwork for better monitoring of LLMs and a better understanding of how they might encode longer-term planning information.
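To make the probing setup concrete, below is a minimal sketch of one way a "residual stream decoder" could be implemented: a learned linear map from a residual stream activation into a model's input embedding space, from which text is greedily decoded as a single "soft token" prompt. The model name, layer choice, and decoding scheme here are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical residual-stream-decoder probe (illustrative sketch only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: any small causal LM; the paper's models may differ
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

LAYER = 6  # which residual stream layer to probe (assumption)
d_model = model.config.hidden_size

# Linear probe from residual stream -> input embedding space.
probe = torch.nn.Linear(d_model, d_model)

def residual_activation(text: str) -> torch.Tensor:
    """Residual stream activation at the last token of `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]  # shape: (d_model,)

def decode_from_activation(act: torch.Tensor, max_new: int = 20) -> str:
    """Greedily decode text from one projected 'soft token'."""
    embeds = probe(act).reshape(1, 1, -1)  # (batch=1, seq=1, d_model)
    generated = []
    for _ in range(max_new):
        with torch.no_grad():
            logits = model(inputs_embeds=embeds).logits[0, -1]
        next_id = int(logits.argmax())
        generated.append(next_id)
        next_emb = model.get_input_embeddings()(torch.tensor([[next_id]]))
        embeds = torch.cat([embeds, next_emb], dim=1)
    return tok.decode(generated)

# A training loop (not shown) would fit `probe` so that text decoded from
# the activation at a paragraph boundary matches the *next* paragraph,
# e.g. via cross-entropy over the continuation's tokens.
act = residual_activation("The recipe has three steps. First,")
print(decode_from_activation(act))
```

Under this framing, the abstract's "5+ tokens of future context" result would correspond to how much of the decoded continuation matches text the model has not yet generated.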