
BigToM Forward Belief Scenarios

Updated 21 November 2025
  • BigToM Forward Belief Scenarios are systematically controlled ToM tasks that assess first-order belief reasoning in artificial agents.
  • They employ formal causal templates and diverse inference methods, including Bayesian updates and regression approaches, to simulate belief dynamics.
  • Empirical results show that while models like GPT-4 achieve near-ceiling performance, smaller LLMs exhibit significant biases in belief attribution.

BigToM Forward Belief Scenarios are a class of systematically controlled theory-of-mind (ToM) reasoning tasks designed to probe first-order belief tracking in artificial agents, particularly LLMs. These scenarios operationalize the classic false-belief paradigm within computational and neural frameworks, providing a rigorous platform for benchmarking belief attribution, stepwise uncertain inference, and belief dynamics over multi-agent and multimodal environments. BigToM forward belief scenarios have become a key empirical and methodological touchstone for evaluating and developing ToM capabilities in LLMs and related planning systems.

1. Formal Definition and Structure of Forward Belief Scenarios

A BigToM forward belief scenario consists of a narrative world $W$ and an agent $A$ embedded in a sequence of discrete events $E_1, E_2, \dots, E_k, E_{k+1}$, where $E_k$ is the last event observed by $A$ and $E_{k+1}$ is a state-changing event not observed by $A$. The agent's belief at time $t$ is expressed as the subjective probability

$b_A(t) = \Pr_A[W = W^L \mid \text{Observations}_A\ \text{up to}\ t]$

where $W^L$ denotes the last world state observed by $A$.

At time $k+1$, after an unobserved event causes the true world state to shift from $W_0$ to $W_1 \neq W_0$, the scenario probes: "What does agent $A$ believe about the world?" Formally, the task is to compute $b_A(k+1)$, recognizing that $A$'s belief remains anchored to the last observed state $W_0$ despite the actual world changing to $W_1$ (Chulo et al., 19 Nov 2025; Gandhi et al., 2023).

Each BigToM scenario is a tuple:

  • Story: Controlled narrative instantiating abstract causal relations.
  • Question: Queries the agent’s belief after an unobserved event.
  • Choices: Typically binary, e.g. does $A$ believe $p$ or $\neg p$?

This framework isolates pure belief state inference and enables tightly controlled experimental conditions over perceptual access and event salience.
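As a concrete illustration, the scenario tuple above can be encoded as a small data structure. This is a minimal sketch: the field names, helper function, and sample narrative are hypothetical, not drawn from the benchmark's released items.

```python
from dataclasses import dataclass

@dataclass
class ForwardBeliefItem:
    story: str                  # controlled narrative
    question: str               # queries the agent's belief
    choices: tuple[str, str]    # binary: p vs. not-p
    initial_state: str          # W0, the last state the agent observed
    true_state: str             # W1, after the unobserved event
    agent_perceives_event: bool

def expected_belief(item: ForwardBeliefItem) -> str:
    """The agent's belief tracks W1 only if it perceived the event;
    otherwise it stays anchored to the last observed state W0."""
    return item.true_state if item.agent_perceives_event else item.initial_state

item = ForwardBeliefItem(
    story="Noor fills the pitcher with oat milk; a barista swaps it for almond milk.",
    question="What does Noor believe is in the pitcher?",
    choices=("oat milk", "almond milk"),
    initial_state="oat milk",
    true_state="almond milk",
    agent_perceives_event=False,
)
print(expected_belief(item))  # → oat milk
```

The key control knob is `agent_perceives_event`: flipping it converts the false-belief item into its true-belief counterpart while holding the narrative constant.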

2. Task Generation via Causal Templates

BigToM scenarios are constructed using formal causal templates to specify the syntactic and semantic scaffolding of ToM reasoning (Gandhi et al., 2023). The main variables are:

  • $C$: Context (agent, setting)
  • $D$: Desire (agent’s goal)
  • $P_0$, $B_0$: Initial percept and derived belief
  • $E$: Exogenous causal event (world-altering)
  • $P_1$: Percept of $E$ (or its absence)
  • $B_1$: Agent’s post-event belief
  • $A_1$: Subsequent action

The causal template induces the following dependency graph:

$P_0 \rightarrow B_0, \quad E \rightarrow \text{(world state change)}, \quad P_1 \rightarrow B_1, \quad (B_1, D) \rightarrow A_1$

Concrete BigToM items are synthesized by sampling narrative parameters with LLMs such as GPT-4, enforcing constraints that ensure the intended inference is controlled (e.g., by specifying whether the agent perceives the causal event). This procedural generation supports thousands of evaluation items with high linguistic and situational diversity, matching expert quality according to blinded human raters (Gandhi et al., 2023).
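A toy sketch of this procedural generation, with a hand-written string template standing in for the LLM-driven parameter sampling that Gandhi et al. describe (all parameter values and names below are invented for illustration):

```python
import random

# Template slots follow the causal variables: C (agent, setting),
# D (desire), P0 (initial percept), E (causal event), P1 (percept of E).
TEMPLATE = (
    "{agent} is in {setting} and wants {desire}. "   # C, D
    "{agent} sees that {initial_percept}. "          # P0 -> B0
    "{event}. "                                      # E
    "{perception_clause}"                            # P1 (present or absent)
)

def sample_item(agent_perceives_event: bool) -> str:
    """Instantiate one toy narrative; the perception clause is the
    controlled variable separating true- from false-belief items."""
    params = {
        "agent": random.choice(["Ada", "Ben"]),
        "setting": "a cafe",
        "desire": "a cup of coffee",
        "initial_percept": "the pot is full",
        "event": "A coworker empties the pot",
    }
    params["perception_clause"] = (
        f"{params['agent']} notices this." if agent_perceives_event else ""
    )
    return TEMPLATE.format(**params).strip()

print(sample_item(agent_perceives_event=False))
```

In the actual pipeline the slot fillers come from an LLM under consistency constraints, which is what yields the linguistic diversity the human raters assessed.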

3. Computational Models and Inference Mechanisms

Multiple architectures and paradigms implement forward belief reasoning within the BigToM framework:

a. Scalable Bayesian ToM Planning

A stepwise Bayesian update decomposes the belief inference task. At each time step $t$, the agent’s belief $b_t$ is a probability distribution $P(s \mid o_{1:t})$ over hidden states $s$, recursively updated by:

  • Prediction: $P(b_t \mid o_{1:t-1}) = \sum_{b_{t-1}} P(b_t \mid b_{t-1}, s_t)\, P(b_{t-1} \mid o_{1:t-1})$
  • Correction: $P(b_t \mid o_{1:t}) \propto P(o_t \mid b_t)\, P(b_t \mid o_{1:t-1})$

A weak-to-strong control architecture integrates ToM-specialized likelihoods from small LMs with context-rich priors from large LMs, delivering strong generalization and scalability in rich multimodal domains (Zhang et al., 2 Jun 2025).

Example Forward Scenario Calculation

| Step | Observation | Updated belief $b_t$ |
|------|-------------|----------------------|
| 0 | — | $b_0 = \text{Uniform}(\{cab_1, \dots, cab_n\})$ |
| 1 | $cab_1$ is empty | $b_1$: zero out $cab_1$, renormalize over $\{cab_2, \dots\}$ |
| 2 | Observation of $cab_3$ | $b_2$: sharpened toward $cab_3$ |

Over successive steps, the posterior converges to the true location as in a Bayesian filter (Zhang et al., 2 Jun 2025).
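The updates above can be reproduced with a few lines of filtering code. This is a minimal sketch: the world is static, so the prediction step is the identity, and the likelihood vectors are invented for illustration.

```python
import numpy as np

n = 4
belief = np.full(n, 1.0 / n)          # b0: uniform over n cabinets

def correct(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """Correction step: P(b_t | o_{1:t}) ∝ P(o_t | b_t) P(b_t | o_{1:t-1})."""
    posterior = likelihood * belief
    return posterior / posterior.sum()

# o1: cabinet 1 is observed empty -> zero it out, renormalize
belief = correct(belief, np.array([0.0, 1.0, 1.0, 1.0]))
# o2: a noisy cue points to cabinet 3 -> posterior sharpens toward it
belief = correct(belief, np.array([0.1, 0.1, 0.7, 0.1]))
print(belief.round(3))  # mass concentrates on cabinet 3
```

With further consistent observations the posterior converges to a point mass on the true location, which is the Bayesian-filter behavior the text describes.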

b. Regression Approaches

Symbolic goal regression reduces the subjective probability $Bel(\phi, do(\alpha, S_0))$ after a sequence of actions and observations to an initial-state query $Bel(R[\phi], S_0)$ via recursive term/formula regression operators, sidestepping the need to explicitly unroll the state-transition and observation chain (Belle et al., 2013). For ToM and nested beliefs, each additional order introduces another set of regression fluents, allowing collapse to a single initial-state inference.
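A heavily simplified, purely propositional sketch of the regression idea (not the situation-calculus $R$ operator itself): querying a goal formula after a deterministic action is reduced to querying a regressed formula in the initial state.

```python
# Toy regression through one deterministic action: instead of simulating
# the action and then testing phi, rewrite phi into a formula R[phi]
# that can be evaluated directly on the initial state.
def regress(phi, action: str):
    if action == "inc":                # action semantics: x := x + 1
        return lambda x: phi(x + 1)    # R[phi](x) = phi(x + 1)
    raise ValueError(f"unknown action: {action}")

phi = lambda x: x == 3                 # goal: x == 3 holds after the action
r_phi = regress(phi, "inc")
print(r_phi(2))  # → True: x == 2 initially implies x == 3 after "inc"
```

Chaining `regress` over an action sequence collapses a multi-step query into a single initial-state evaluation, mirroring how the cited regression operators avoid unrolling the transition chain.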

c. Feed-forward Belief Propagation

In belief networks and neural networks, forward belief propagation analytically tracks marginal means and variances through each network layer for uncertain inputs, using closed-form moment equations adapted for the activation nonlinearity (e.g., sigmoid, ReLU). This provides a differentiable and scalable means of maintaining uncertainty during forward passes, necessary for efficient ToM architectures based on deep networks (Shekhovtsov et al., 2018).
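For a single ReLU unit with a Gaussian input, the closed-form moment propagation can be sketched as follows, using the standard rectified-Gaussian moment formulas; `relu_moments` is an illustrative helper, not an API from the cited work.

```python
import math

def relu_moments(mu: float, var: float) -> tuple[float, float]:
    """Mean and variance of max(0, X) for X ~ N(mu, var),
    via the rectified-Gaussian closed forms."""
    sigma = math.sqrt(var)
    a = mu / sigma
    cdf = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))          # Phi(a)
    pdf = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)   # phi(a)
    mean = mu * cdf + sigma * pdf
    second = (mu * mu + var) * cdf + mu * sigma * pdf          # E[max(0,X)^2]
    return mean, second - mean * mean

m, v = relu_moments(0.0, 1.0)
print(m, v)  # mean = 1/sqrt(2*pi) ≈ 0.3989, variance ≈ 0.3408
```

Applying such per-unit moment maps layer by layer (with linear layers handled exactly) yields the differentiable forward uncertainty propagation described above.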

4. Evaluation, Performance Benchmarks, and Error Profiles

BigToM forward belief scenarios are central to recent ToM evaluation efforts. Key findings include:

  • Human baselines: ≈100% (True-Belief), ≈92% (False-Belief), combined ≈92% accuracy.
  • GPT-4: Near-ceiling performance in 0-shot, with forward belief (false-belief) accuracies .98–.99, but with a mild anchor bias when the agent's initial belief is reiterated (.97→.90) (Gandhi et al., 2023).
  • Smaller LLMs: Dramatically lower, e.g., LLaMA-65B: .41, text-davinci-003: .25, gpt-3.5-turbo: .31, Claude-v1.3: .59, Claude-2: .52.
  • Contrastive Activation Addition (CAA) steering: Applied to Gemma-3-4B, CAA increased accuracy from 32.5% to 46.7% on 1,000 forward-belief scenarios, an absolute gain of 14.2% (Chulo et al., 19 Nov 2025).
  • Hybrid planners (e.g., 8B→405B chain): Achieve 81.3% on multimodal benchmarks, with strong generalization to unseen domains and +4.6% improvement over prior state-of-the-art (Zhang et al., 2 Jun 2025).

Weaker models typically fail to track perceptual access, defaulting to world-state reasoning (what is true) rather than the agent’s belief state (what they think), or they anchor erroneously on initial beliefs.

5. Planning and Symbolic Formulations

Belief-space planners leverage planning graph heuristics adapted for progression in uncertain, partially observable domains (Bryce et al., 2011). Key constructs include:

  • Belief state distances $d(BS, BS')$ via aggregation (max, sum, overlap).
  • Relaxed plan or level-heuristics computed via single, multiple, or labelled-uncertainty graphs (LUGs), with tracking of variable dependencies in BDD data structures.
  • Integration into A*/AO*-style planners allows scalable search and pruning in large belief spaces relevant for robotic and social ToM tasks.

Such methods generalize to hierarchical or nested ToM settings in BigToM, provided variables (actions, observations) are suitably factored or extended to account for higher-order beliefs.
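A toy sketch of the max/sum aggregation of belief-state distances. The set-of-states representation and the Hamming state distance below are simplifying assumptions; the cited planners compute per-state distances with planning-graph heuristics rather than set differences.

```python
# A belief state is modeled as a set of candidate world states,
# each world state as a frozenset of true propositions.
def state_dist(s1: frozenset, s2: frozenset) -> int:
    """Hamming distance between two world states (symmetric difference)."""
    return len(s1 ^ s2)

def belief_dist(bs1, bs2, aggregate=max) -> int:
    """Distance from each state in bs1 to its nearest state in bs2,
    aggregated: max is pessimistic, sum is additive."""
    return aggregate(min(state_dist(s1, s2) for s2 in bs2) for s1 in bs1)

bs = {frozenset({"at_door", "key"}), frozenset({"at_door"})}
goal = {frozenset({"at_door", "key", "door_open"})}
print(belief_dist(bs, goal, max))  # → 2
print(belief_dist(bs, goal, sum))  # → 3
```

Swapping the `aggregate` argument reproduces the max/sum choice in the bullet list; an overlap measure would instead reward states shared between the two belief sets.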

6. Mechanistic Insights from Neural ToM Studies

Recent studies decompose LLM ToM improvements in forward belief scenarios by analyzing internal activations using linear probes across 45 cognitive actions (Chulo et al., 19 Nov 2025). Contrastive activation steering (CAA) was shown to:

  • Enhance emotion-related activity: emotion perception (+2.23 layers), emotion valuing (+2.20 layers).
  • Suppress analytical routines: questioning (-0.78), convergent thinking (-1.59).

The principal mediator of improved ToM performance in LLMs appears to be increased engagement of emotional-processing circuitry rather than traditional analytic reasoning. This challenges the assumption that logical chain-of-thought is the key pathway to social inference in LLMs; instead, perspective-taking appears to arise from affective mechanisms.
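The core CAA operation can be sketched in a few lines. Here the random arrays stand in for hidden activations captured from contrastive prompt pairs (belief-tracking vs. world-state responses), and `alpha` is an assumed steering strength; the real method applies the vector at a chosen transformer layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
pos_acts = rng.normal(0.5, 1.0, size=(32, d))   # activations on positive prompts
neg_acts = rng.normal(-0.5, 1.0, size=(32, d))  # activations on negative prompts

# Steering vector: mean difference between the contrastive activation sets.
steering_vec = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def steer(hidden: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the scaled steering vector to a layer's hidden state at inference."""
    return hidden + alpha * steering_vec

h = rng.normal(size=d)
print(np.linalg.norm(steer(h) - h))  # shift magnitude = ||alpha * steering_vec||
```

Because the vector is computed once and added at inference time, CAA needs no fine-tuning, which is what makes it attractive for probing which internal directions mediate ToM behavior.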

7. Belief Maintenance and Scenario Management Systems

Belief Maintenance Systems (BMS) generalize classical Truth Maintenance Systems (TMS) to infinite-valued logic, allowing for dynamic and reversible assertion of hypothetical facts and propagation of degree-of-belief (Falkenhainer, 2013). In the context of BigToM-style scenarios, BMS:

  • Represents each fact as a node with $(b^+, b^-)$ belief degrees.
  • Utilizes invertible Dempster–Shafer mass assignments to support dynamic forward scenario evaluation ("what if $X = 1$").
  • Allows rollback to previous belief states by removing assertions and repropagating support.

This infrastructure enables fine-grained, reversible scenario analysis in forward belief tasks, providing a foundation for rich what-if explorations in ToM and planning.
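A minimal, single-proposition sketch of the reversibility idea: with simple-support assignments, combined support is $1 - \prod_i (1 - s_i)$, so an assertion can be retracted by dividing its factor back out. (The full BMS tracks $(b^+, b^-)$ pairs and justification networks; this toy keeps only positive support.)

```python
class BeliefNode:
    """Toy reversible belief node: combines simple-support degrees
    multiplicatively and undoes them by division (the invertibility
    a BMS exploits for what-if rollback)."""

    def __init__(self):
        self._residual = 1.0          # product of (1 - s_i) over active supports

    @property
    def belief(self) -> float:
        return 1.0 - self._residual

    def assert_support(self, s: float) -> None:
        self._residual *= (1.0 - s)

    def retract_support(self, s: float) -> None:
        self._residual /= (1.0 - s)   # rollback: remove one assertion

node = BeliefNode()
node.assert_support(0.6)
node.assert_support(0.5)              # hypothetical "what if X = 1" assertion
print(round(node.belief, 3))          # → 0.8
node.retract_support(0.5)             # withdraw the hypothetical
print(round(node.belief, 3))          # → 0.6
```

Retraction restores the pre-assertion belief exactly, without replaying the whole derivation, which is what enables cheap forward what-if exploration.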


BigToM forward belief scenarios operationalize first-order belief reasoning in computational contexts, offering a rigorous benchmark for ToM, planning, and neural inference methods. Their systematic structure, empirical tractability, and extensibility to multimodal and hierarchical settings make them central to ongoing progress in machine social cognition. Recent advances clarify both the statistical and mechanistic pathways underlying LLM ToM abilities and provide a roadmap for future research into scalable, robust, and transparent belief inference systems.
