
Examining Two Hop Reasoning Through Information Content Scaling (2502.03490v2)

Published 5 Feb 2025 in cs.AI and cs.LG

Abstract: Prior work has found that transformers have an inconsistent ability to learn to answer latent two-hop questions -- questions of the form "Who is Bob's mother's boss?" We study why this is the case by examining how transformers' capacity to learn datasets of two-hop questions and answers (two-hop QA) scales with their size, motivated by prior work on transformer knowledge capacity for simple factual memorization. We find that capacity scaling and generalization both support the hypothesis that latent two-hop QA requires transformers to learn each fact twice, while two-hop QA with chain of thought does not. We also show that with appropriate dataset parameters, it is possible to "trap" very small models in a regime where they memorize answers to two-hop questions independently, even though they would perform better if they could learn to answer them with function composition. Our findings show that measurement of capacity scaling can complement existing interpretability methods, though there are challenges in using it for this purpose.


Summary

  • The paper demonstrates that transformer models replicate factual information inefficiently when processing two-hop reasoning tasks.
  • The methodology leverages information content scaling to validate a two-function composition hypothesis in transformer architectures.
  • The study finds that chain-of-thought training significantly enhances model generalization on unseen two-hop queries.

Analyzing Information Content Scaling in Two-Hop Reasoning with Transformers

The paper "Examining Two Hop Reasoning Through Information Content Scaling" by Johnston and Belrose presents an empirical investigation into the ability of transformer models to reason over two-hop questions and answers within machine learning paradigms. The specific concern addressed is the extent to which transformer models can apply compositional reasoning to solve queries requiring two distinct factual lookups, such as answering questions of the form "Who is Bob's mother's boss?" The paper is situated in the broader context of assessing transformer architectures' capabilities beyond simple factual memorization, with a focus on understanding how these models handle more abstract reasoning tasks through scaling observations.

Methodological Approach

The authors examine how transformers trained on synthetic datasets of two-hop question and answer pairs manage these tasks as model size increases. The paper leverages the concept of information content scaling, introduced by Allen-Zhu and Li (2024) in their work on the physics of language models, which posits that transformers can store approximately 2 bits of knowledge per parameter. The research hypothesizes that answering two-hop questions latently requires learning each fact twice, because the strictly feed-forward pass must perform the two lookups in sequence across layers. In contrast, two-hop QA with a chain of thought, where the output of the first step is fed back in as input to the second, may not necessitate this duplicative learning.
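A rough back-of-the-envelope calculation shows how these hypotheses translate into parameter-count predictions. The sketch below uses illustrative dataset sizes (not the paper's) together with the approximate 2-bits-per-parameter capacity figure to compare how many parameters would be needed if each fact must be stored twice (latent two-hop) versus once (with chain of thought).

```python
import math

# Back-of-the-envelope capacity estimate; all numbers are illustrative,
# not the paper's actual dataset parameters.
n_people    = 10_000   # entities per relation
n_targets   = 10_000   # possible values each fact can take
n_relations = 2        # e.g. "mother of" and "boss of"

bits_per_fact = math.log2(n_targets)          # bits to specify one target
total_facts   = n_people * n_relations
dataset_bits  = total_facts * bits_per_fact   # minimal information in the fact set

BITS_PER_PARAM = 2.0  # approximate storage capacity per parameter

# Hypothesis A: latent two-hop QA forces each fact to be learned twice
# (once usable for the first hop, once for the second hop).
params_latent = 2 * dataset_bits / BITS_PER_PARAM

# Hypothesis B: with chain of thought, each fact only needs to be learned once.
params_cot = dataset_bits / BITS_PER_PARAM

print(f"dataset information: {dataset_bits:,.0f} bits")
print(f"params needed, latent two-hop (facts stored twice): ~{params_latent:,.0f}")
print(f"params needed, with chain of thought:               ~{params_cot:,.0f}")
```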

Results and Key Findings

The experimental findings broadly support the two-function composition hypothesis: transformers fit capacity scaling predictions better when they must replicate factual data across network layers to answer two-hop queries without a chain of thought. Key results include:

  • Information Content: The paper reaffirms prior findings that a transformer's knowledge capacity is around 2 bits per parameter for simple factual tasks. In two-hop tasks without a chain of thought, models show a distinct stepping pattern in information content as they scale, suggesting that they learn a compositional function but replicate the underlying facts inefficiently.
  • Scaling Behavior: With appropriate dataset parameters, small models can become trapped in a regime where they memorize answers to two-hop questions independently, because the more efficient, generalizing compositional solution is not learned under those resource constraints.
  • Generalization: Models trained without chain-of-thought sequences fail to generalize well to unseen two-hop questions when their components do not overlap with those present during training, in line with the predictions of the two-function composition algorithm (a sketch of such a probe follows this list).
  • Efficiency with Chain-of-Thought: Training with explicit reasoning or chain-of-thought greatly enhances transformer performance on two-hop reasoning tasks, aligning observed scaling properties closer to recurrent compositional capabilities.
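One plausible way to run the generalization probe mentioned above, assuming the single-hop facts are exposed during training while only some of their two-hop compositions are, is sketched below; model_answer is a hypothetical stand-in for whatever inference call the training setup provides.

```python
import random

# Sketch of a held-out composition split (hypothetical data and interfaces).
random.seed(0)
people    = [f"person_{i}" for i in range(100)]
mother_of = {p: random.choice(people) for p in people}
boss_of   = {p: random.choice(people) for p in people}

# Assume every single-hop fact appears in training; only 80 of the 100
# two-hop compositions are trained on, the rest are held out for evaluation.
two_hop = [(p, boss_of[mother_of[p]]) for p in people]
random.shuffle(two_hop)
train_qa, held_out_qa = two_hop[:80], two_hop[80:]

def model_answer(subject: str) -> str:
    """Placeholder for querying the trained transformer (hypothetical)."""
    raise NotImplementedError

def held_out_accuracy() -> float:
    """Low accuracy here, alongside high training accuracy, suggests the model
    memorized compositions rather than learning to compose the two facts."""
    correct = sum(model_answer(subj) == ans for subj, ans in held_out_qa)
    return correct / len(held_out_qa)
```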

Implications and Future Prospects

The implications of these findings are twofold. Practically, the results suggest that enhancing transformer models with reasoning paths akin to chains of thought can significantly boost their capacity for complex reasoning tasks. Theoretically, the research deepens understanding of how neural architectures might harness compositional reasoning—an ability closely aligned with human problem-solving strategies.

Moreover, the paper opens avenues for future research on interpretability. By further refining the measurement of information content, researchers can more robustly probe the architectural constraints that determine how models store and compose factual knowledge. The paper underscores the potential limits of non-recurrent, feed-forward architectures in mimicking human-like multi-step reasoning, potentially driving interest toward alternative or augmented architectures that allow recursive processing.

Overall, Johnston and Belrose's work contributes a crucial empirical perspective on optimizing model training for complex reasoning. By dissecting transformer behavior with a lens on information efficiency, the paper lays valuable groundwork for optimizing machine learning models' application in domains that demand deeper contextual and multi-faceted reasoning capabilities.

