
One Model, Two Minds: A Context-Gated Graph Learner that Recreates Human Biases (2509.08705v1)

Published 10 Sep 2025 in cs.AI

Abstract: We introduce a novel Theory of Mind (ToM) framework inspired by dual-process theories from cognitive science, integrating a fast, habitual graph-based reasoning system (System 1), implemented via graph convolutional networks (GCNs), and a slower, context-sensitive meta-adaptive learning system (System 2), driven by meta-learning techniques. Our model dynamically balances intuitive and deliberative reasoning through a learned context gate mechanism. We validate our architecture on canonical false-belief tasks and systematically explore its capacity to replicate hallmark cognitive biases associated with dual-process theory, including anchoring, cognitive-load fatigue, framing effects, and priming effects. Experimental results demonstrate that our dual-process approach closely mirrors human adaptive behavior, achieves robust generalization to unseen contexts, and elucidates cognitive mechanisms underlying reasoning biases. This work bridges artificial intelligence and cognitive theory, paving the way for AI systems exhibiting nuanced, human-like social cognition and adaptive decision-making capabilities.

Summary

  • The paper presents a dual-process AI model that integrates fast GCN-based habitual reasoning with a slow, meta-adaptive system.
  • It introduces a context gate mechanism to dynamically balance between intuitive and deliberative processing based on cognitive load.
  • Empirical results on false-belief tasks demonstrate robust generalization and the model's ability to both reproduce and, via the gate, correct cognitive biases such as anchoring and framing.

One Model, Two Minds: A Context-Gated Graph Learner that Recreates Human Biases

Introduction

This paper introduces a Theory of Mind (ToM) framework that draws on dual-process cognitive theories to enhance AI systems. The framework integrates a fast, habitual system implemented via Graph Convolutional Networks (GCNs) and a slower, context-sensitive meta-adaptive learning system driven by meta-learning techniques. The central idea is a dynamic balance between intuitive and deliberative reasoning, mediated by a learned context gate. The paper validates this architecture on canonical false-belief tasks and explores its capacity to reproduce cognitive biases such as anchoring, cognitive-load fatigue, framing effects, and priming effects. Empirical results show that the dual-process model aligns with human adaptive behavior, generalizes robustly to unseen contexts, and elucidates cognitive mechanisms underlying reasoning biases (Figure 1).

Figure 1: OM2M model pipeline overview depicting the dual-process representation for belief inference.

Methodology

The OM2M framework employs a hybrid neural model for Theory of Mind reasoning, synthesizing both fast, habitual inference and slow, context-sensitive adaptation. The model is uniquely designed to replicate human cognitive processes, leveraging a dual-process architecture.

System 1: Graph-Based Habitual Reasoner

System 1 is a GCN-based mechanism responsible for encoding social scenarios into graph structures representing agents, objects, and locations. Each node is associated with a feature vector, and the GCN processes these along with agent-specific meta-vectors to achieve efficient, low-variability inferences typical of fast, habitual reasoning.
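The summary does not give layer-level details, so the following is only a minimal sketch of what such a System-1 reasoner could look like: a small two-layer GCN over a dense adjacency matrix whose nodes stand for agents, objects, and locations, with node features concatenated with agent meta-vectors. All module names, dimensions, and the dense-adjacency formulation are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code) of a System-1 habitual reasoner.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops, symmetrically normalize, then propagate and transform.
        adj = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(norm_adj @ x))


class System1Reasoner(nn.Module):
    """Fast, habitual belief inference over the scenario graph."""

    def __init__(self, node_dim: int, meta_dim: int, hidden_dim: int, num_locations: int):
        super().__init__()
        self.gcn1 = SimpleGCNLayer(node_dim + meta_dim, hidden_dim)
        self.gcn2 = SimpleGCNLayer(hidden_dim, hidden_dim)
        # Readout over candidate locations, e.g. "where will the agent search?"
        self.readout = nn.Linear(hidden_dim, num_locations)

    def forward(self, node_feats, agent_meta, adj, agent_idx):
        # agent_meta: per-node meta-vectors (zeros for non-agent nodes) -- an assumption.
        x = torch.cat([node_feats, agent_meta], dim=-1)
        h = self.gcn2(self.gcn1(x, adj), adj)
        return self.readout(h[agent_idx])  # logits over candidate locations
```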

System 2: Meta-Adaptive Controller

System 2 consists of a multi-layer perceptron (MLP) that supplements the rapid decisions of the GCN with slower, deliberative computation. This meta-controller dynamically rewrites GCN parameters to accommodate context-sensitive reasoning, significantly improving adaptability in fluctuating environments.
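How the meta-controller rewrites System 1's parameters is not spelled out in this summary; one plausible, simplified reading is a FiLM-style scheme in which the MLP maps a context summary to a scale and shift applied to System 1's hidden features before the readout. The sketch below assumes the System1Reasoner class from the previous sketch; the adaptation rule and all names are assumptions.

```python
# Hedged sketch of the System-2 meta-controller under a FiLM-like assumption.
# The paper's actual parameter-adaptation rule may differ.
import torch
import torch.nn as nn


class System2Controller(nn.Module):
    def __init__(self, context_dim: int, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.mlp = nn.Sequential(
            nn.Linear(context_dim, 2 * hidden_dim),
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, 2 * hidden_dim),
        )

    def forward(self, context: torch.Tensor):
        # context: summary of load, surprise, frame cues, recent history, etc.
        scale, shift = self.mlp(context).split(self.hidden_dim, dim=-1)
        return 1.0 + scale, shift  # multiplicative and additive adjustments


def system2_logits(system1, controller, node_feats, agent_meta, adj, agent_idx, cues):
    """Deliberative pass: rerun System 1's encoder, adapt its features, then read out.

    Assumes the System1Reasoner sketched above; a simplification of the paper's
    description of the controller rewriting GCN parameters.
    """
    x = torch.cat([node_feats, agent_meta], dim=-1)
    h = system1.gcn2(system1.gcn1(x, adj), adj)
    scale, shift = controller(cues)
    return system1.readout(scale * h[agent_idx] + shift)
```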

Contextual Gate

The introduction of a contextual gate enables the model to fluidly transition between System 1 and System 2 outputs. This gating mechanism is sensitive to cognitive load, surprise, and framing, allowing for flexible arbitration between rapid habitual responses and in-depth deliberative analysis.
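A minimal reading of the gate, assuming it maps cue features (e.g., cognitive load, surprise, framing) to a scalar weight g in [0, 1] that mixes the two systems' belief logits:

```python
# Sketch of the learned context gate; the cue features and mixing rule are assumptions.
import torch
import torch.nn as nn


class ContextGate(nn.Module):
    def __init__(self, cue_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cue_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, cues, logits_s1, logits_s2):
        g = torch.sigmoid(self.net(cues))            # weight on deliberative System 2
        return g * logits_s2 + (1.0 - g) * logits_s1, g
```

Under this reading, routine or high-load contexts would push g toward 0 (habitual control), while surprising or effortful contexts would push it toward 1 (deliberative control).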

Training and Evaluation

Training proceeds in two phases: System 1 is first pretrained for habitual inference, after which System 2 and the contextual gate are trained jointly on both routine and cognitively taxing scenarios. This two-phase schedule keeps the model versatile across contexts; a sketch of the loop follows.
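A hedged sketch of that two-phase schedule, reusing the modules sketched earlier in this section. The episode format, optimizer, loss, epoch counts, and the choice to freeze System 1 in the second phase are assumptions, not details reported in the paper.

```python
# Illustrative two-phase training loop over (node_feats, agent_meta, adj, agent_idx, cues, target) episodes.
import torch
import torch.nn.functional as F


def pretrain_system1(system1, routine_episodes, epochs=10, lr=1e-3):
    """Phase 1: habitual inference on routine scenarios."""
    opt = torch.optim.Adam(system1.parameters(), lr=lr)
    for _ in range(epochs):
        for node_feats, agent_meta, adj, agent_idx, _cues, target in routine_episodes:
            logits = system1(node_feats, agent_meta, adj, agent_idx)
            loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
            opt.zero_grad()
            loss.backward()
            opt.step()


def train_system2_and_gate(system1, controller, gate, mixed_episodes, epochs=10, lr=1e-3):
    """Phase 2: joint training of the meta-controller and gate (System 1 frozen here)."""
    for p in system1.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(list(controller.parameters()) + list(gate.parameters()), lr=lr)
    for _ in range(epochs):
        for node_feats, agent_meta, adj, agent_idx, cues, target in mixed_episodes:
            logits_s1 = system1(node_feats, agent_meta, adj, agent_idx)
            logits_s2 = system2_logits(system1, controller, node_feats, agent_meta,
                                       adj, agent_idx, cues)
            mixed, _g = gate(cues, logits_s1, logits_s2)
            loss = F.cross_entropy(mixed.unsqueeze(0), target.unsqueeze(0))
            opt.zero_grad()
            loss.backward()
            opt.step()
```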

Experiments

The evaluation covers a range of computational simulations designed to probe human reasoning and cognitive control. The model generalizes robustly to novel false-belief tasks, closely approximating human-like Theory of Mind behavior (Figure 2).

Figure 2: Relational graph representation of the Sally-Anne Theory of Mind task.
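As an illustration of such a relational encoding (not the paper's exact schema), the Sally-Anne scenario can be written as a small graph whose nodes are the two agents, the marble, and the two containers, with edges for the marble's true location and for what each agent last observed:

```python
# Hedged sketch of encoding the Sally-Anne task as a graph compatible with the
# System-1 reasoner sketched earlier; node and edge vocabulary are assumptions.
import torch

NODES = ["Sally", "Anne", "marble", "basket", "box"]
IDX = {name: i for i, name in enumerate(NODES)}


def sally_anne_adjacency(after_move: bool) -> torch.Tensor:
    """Symmetric adjacency over the five scenario nodes.

    Edges encode the marble's true location and each agent's last observation;
    node feature vectors (agent/object/location type, etc.) would be supplied separately.
    """
    adj = torch.zeros(len(NODES), len(NODES))

    def link(a: str, b: str) -> None:
        adj[IDX[a], IDX[b]] = adj[IDX[b], IDX[a]] = 1.0

    # True world state: the marble starts in the basket; Anne later moves it to the box.
    link("marble", "box" if after_move else "basket")
    # Observation edges: Sally saw only the initial placement (she has left the room);
    # Anne also witnesses the move.
    link("Sally", "basket")
    link("Anne", "box" if after_move else "basket")
    return adj

# The false-belief query asks where Sally will look for the marble: the correct
# answer follows her stale observation edge (basket), not the true location (box).
```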

Results

Generalization and Bias Mitigation

The experiments show that the model generalizes ToM reasoning and document both its susceptibility to cognitive biases and its subsequent correction of them. The contextual gate plays the pivotal role, arbitrating between automatic and deliberative processing.

Anchoring and Priming

In anchoring tasks, System 1 exhibits a strong bias that System 2 subsequently overrides once the gate shifts control toward deliberative processing. One-shot priming experiments further confirm the model's capacity for transient belief revision (Figure 3).

Figure 3: One-shot priming effect reflecting transient memory capacity in the model.

Cognitive Load Effects

Increasing cognitive load reduces System 2 engagement, confirming the dual-process prediction that resource constraints shift control from deliberative back to habitual processing (Figure 4).

Figure 4: Cognitive load impact on inference showing reduced System 2 engagement.
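For intuition, a trained gate of the form sketched in the Methodology section can be probed by sweeping a load cue and reading off the System-2 weight. The snippet below is only a probing template: the gate here is untrained, so the monotone decrease reported in the paper would emerge only after training, and the cue layout is an assumption.

```python
# Probing template (not a result from the paper): sweep a cognitive-load cue and
# read off the gate's System-2 weight.
import torch

gate = ContextGate(cue_dim=3)  # ContextGate as sketched in the Methodology section
with torch.no_grad():
    for load in [0.0, 0.5, 1.0, 2.0]:
        cues = torch.tensor([load, 0.0, 0.0])   # assumed layout: [load, surprise, frame]
        g = torch.sigmoid(gate.net(cues))       # gate scalar only, ignoring the logits mix
        print(f"load={load:.1f} -> System-2 weight g={g.item():.2f}")
```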

Framing Effects

Even with factual inputs held constant, frame cues produce substantial inference shifts through the learned gate's modulation of System 2 activity, showing that the model reproduces framing effects without any change in the underlying evidence.

Conclusion

The OM2M architecture captures essential dual-process aspects of human-like reasoning within a coherent neural framework. Its demonstrated ability to adaptively replicate human biases and achieve robust generalization marks a significant stride toward developing socially intelligent AI. Future work will extend this model to more dynamic multi-agent environments, addressing richer cognitive demands to refine human-aligned machine reasoning further.
