
Humans Perceive Wrong Narratives from AI Reasoning Texts (2508.16599v2)

Published 9 Aug 2025 in cs.HC, cs.AI, and cs.CL

Abstract: A new generation of AI models generates step-by-step reasoning text before producing an answer. This text appears to offer a human-readable window into their computation process, and is increasingly relied upon for transparency and interpretability. However, it is unclear whether human understanding of this text matches the model's actual computational process. In this paper, we investigate a necessary condition for correspondence: the ability of humans to identify which steps in a reasoning text causally influence later steps. We evaluated humans on this ability by composing questions based on counterfactual measurements and found a significant discrepancy: participant accuracy was only 29%, barely above chance (25%), and remained low (42%) even when evaluating the majority vote on questions with high agreement. Our results reveal a fundamental gap between how humans interpret reasoning texts and how models use it, challenging its utility as a simple interpretability tool. We argue that reasoning texts should be treated as an artifact to be investigated, not taken at face value, and that understanding the non-human ways these models use language is a critical research direction.

Summary

  • The paper reveals that humans misinterpret AI-generated reasoning texts, with an accuracy of only about 29% in identifying true causal dependencies.
  • It employs counterfactual analysis to systematically compare human-selected narrative steps with the model’s computational process.
  • The study underscores the need for advanced interpretability frameworks that bridge the gap between human intuition and machine logic.

Humans Perceive Wrong Narratives from AI Reasoning Texts

Introduction

This paper investigates the discrepancy between human interpretations of AI-generated reasoning texts and the computational processes the models actually use. The work questions the utility of reasoning texts as transparent interpretability tools, revealing a significant gap between human perception and the model's internal operations. The authors conducted experiments evaluating whether humans can identify true causal dependencies in reasoning texts, and the low accuracy they observed suggests a pressing need for new methodologies in AI interpretability.

Research Methodology

The research involves human participants tasked with identifying causal dependencies within reasoning texts produced by large reasoning models. Using counterfactual analysis, the paper assesses whether removing certain steps alters the subsequent reasoning, distinguishing perceived dependencies from actual ones. Participants are tested on their ability to select the causally influential step from among multiple options (Figure 1).

Figure 1: Illustration of human interpretation versus the actual causal relations in AI-generated reasoning text.

The causal dependencies are determined systematically by evaluating changes in reasoning outcomes when individual steps are omitted. Measurements indicate profound discrepancies between human interpretations and the model's utilization of reasoning texts.
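A minimal sketch of this counterfactual procedure is shown below. It is an illustration under stated assumptions, not the authors' implementation: the callables for deterministically regenerating the target sentence and for judging semantic equivalence (e.g., via another model) are hypothetical stand-ins supplied by the caller.

```python
from typing import Callable, List


def causal_parents(
    problem: str,
    steps: List[str],
    target_idx: int,
    regenerate_target: Callable[[str, List[str]], str],
    semantically_equivalent: Callable[[str, str], bool],
) -> List[int]:
    """Return indices of earlier steps whose removal changes the meaning of
    the regenerated target sentence, i.e., its true causal parents."""
    original_target = steps[target_idx]
    parents = []
    for i in range(target_idx):
        # Keep every step before the target except step i.
        ablated_context = steps[:i] + steps[i + 1 : target_idx]
        # Regenerate only the target sentence from the ablated context;
        # generation is assumed deterministic so the removed step is the
        # only varying factor.
        new_target = regenerate_target(problem, ablated_context)
        # An external judge decides whether the meaning changed.
        if not semantically_equivalent(original_target, new_target):
            parents.append(i)
    return parents
```

Given any concrete model client and equivalence judge, the returned indices define the ground-truth dependencies against which human answers are scored.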

Results and Analysis

Participants performed poorly at identifying the true causal dependencies: average accuracy was about 29%, barely above random chance (25%), indicating a systematic misunderstanding of the model's process (Figure 2). Even where participants showed high agreement on a perceived narrative, these shared interpretations often deviated from the model's actual process, underscoring the unreliability of collective human intuition in this setting (Figure 3).

Figure 2: Distribution of individual narrative interpretation accuracy shows universal difficulty in interpreting AI reasoning texts.

Figure 3: Analysis of collective narrative accuracy indicates that higher participant agreement does not correlate with correct understanding.

Reasoning texts produced by different models varied in interpretability, but the fundamental gap persisted across models, underscoring the challenge of accurately interpreting AI reasoning processes. The analysis establishes that the mismatch stems not from individual cognitive differences but from an intrinsic discrepancy between human narrative frameworks and machine logic.

Implications

The findings suggest substantial limitations in using AI-generated reasoning texts as interpretability tools. The results challenge current assumptions about transparency in AI processes and argue against treating reasoning texts as reliable explanatory artifacts. The paper also points to a distinctive way in which AI systems employ language, prompting a rethinking of how language mediates communication between humans and AI.

The paper implies that more sophisticated methods are necessary to bridge the interpretative gap, perhaps involving advanced techniques that systematically decode the computational logic of models beyond surface-level text analysis. The research opens pathways for further studies on developing robust frameworks for AI interpretability that align with human cognitive constructs.

Conclusion

In conclusion, this paper highlights a fundamental disconnect between how humans interpret AI-generated reasoning texts and the operations the models actually perform. These findings spotlight crucial deficiencies in current interpretability approaches and call for methodological advances that account for the foreign, albeit effective, linguistic constructs of AI systems. Bridging this interpretative gap is essential for enhancing trust and collaboration in human-AI interaction, and thus requires focused effort on nuanced interpretability frameworks (Figure 4).

Figure 4: Test interface showcasing the experimental setup used for participant analysis.


Explain it Like I'm 14

A simple explanation of “Humans Perceive Wrong Narratives from AI Reasoning Texts”

1. What is this paper about?

Some new AI systems write out their “thinking” step by step before giving an answer. People often read these steps and believe they show how the AI actually figured things out. This paper asks a big question: do humans really understand these “reasoning texts” the way the AI actually uses them?

The short answer the paper finds: not really. People often get the story wrong.

2. What questions did the researchers ask?

They focused on one important, easy-to-check piece of understanding: can people tell which earlier sentence in the AI’s reasoning actually caused a later sentence to be written?

In other words, given a target sentence in the AI’s step-by-step text, can you pick the earlier sentence that truly influenced it? If people can’t even do that, it’s a warning sign that the “reasoning text” isn’t a clear window into the AI’s real process.

3. How did they study it?

Think of the AI’s reasoning text like a chain of dominoes—one sentence after another. To test what really matters:

  • They took AI-generated reasoning for simple math word problems (things like totaling distances or costs).
  • They split the reasoning into sentences.
  • For a chosen target sentence, they tested each earlier sentence one by one by “removing” it and asking the AI to regenerate just the target sentence.
    • If removing sentence X makes the target sentence change in meaning, then sentence X truly caused (influenced) the target sentence.
    • If the target doesn’t change, then sentence X didn’t matter for that target.
  • They did this carefully so the only difference was the missing sentence (no randomness).
  • They used another AI to double-check whether the original and new target sentences really said the same thing or not.

Then they made a human quiz:

  • People saw the problem and the AI’s reasoning up to a target sentence.
  • They saw four earlier sentences and had to pick which single one actually caused the target sentence.
  • There was always exactly one correct choice.
  • 80 adult participants took 50 such questions each.

This is like asking: which domino, if removed, would change how a later domino falls?
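To make the quiz format concrete, here is a small sketch of what one question and its scoring might look like. The field names and the `accuracy` helper are hypothetical illustrations, not the paper's actual data format or code; the only facts carried over are that each question offers four earlier sentences, exactly one of which is the true cause, so guessing yields 25%.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class QuizQuestion:
    problem: str                 # the original math word problem
    reasoning_so_far: List[str]  # reasoning steps shown up to the target
    target_sentence: str         # the later sentence whose cause is asked about
    options: List[str]           # four earlier sentences to choose from
    correct_index: int           # exactly one option truly influenced the target


def accuracy(questions: List[QuizQuestion], answers: List[int]) -> float:
    """Fraction of questions answered correctly; with four options,
    chance level is 0.25."""
    correct = sum(1 for q, a in zip(questions, answers) if a == q.correct_index)
    return correct / len(questions)
```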

4. What did they find, and why does it matter?

Key results:

  • Individuals did poorly: average accuracy was about 29%, barely above guessing (25%).
  • Even when many people agreed on the same answer (a strong “shared story”), they were still wrong a lot: only about 40% correct.
  • Background and experience didn’t help: people with STEM degrees, people who use AI often, or people who took more time didn’t do better.
  • The gap showed up across different AI models. People did a bit better on one model’s texts than the other, but overall performance was still low.

Why this matters:

  • It shows a big mismatch between the story humans think the AI is following and the way the AI actually uses its own text.
  • That means we shouldn’t treat AI “reasoning text” as a trustworthy explanation of how the AI got its answer just by reading it like a human explanation.

5. What does this mean for the future?

  • Don’t take AI “thinking” text at face value. It may look like an explanation, but humans can easily misread it.
  • We need better tools and tests to probe what parts of the AI’s text actually affect later steps and the final answer.
  • AIs might “use” language differently from humans. They can write fluent, logical-looking text, but the hidden cause-and-effect inside their process doesn’t match human expectations.
  • Building safe, trustworthy AI will require new ways to study and interpret these systems, not just reading their step-by-step text as if it were a human diary.