Adaptive Mindset Selection Based on Context in LLM Reasoning

Determine how to adaptively select the most suitable cognitive mindset (such as Spatial, Convergent, Divergent, or Algorithmic thinking) based on the task and the intermediate reasoning context within large language model (LLM) reasoning systems. The goal is a system that chooses context-appropriate thinking modes during inference rather than relying on a fixed or preselected strategy.

Background

The paper frames LLM reasoning in terms of heterogeneous cognitive mindsets—Spatial, Convergent, Divergent, and Algorithmic—that serve distinct functions and require explicit orchestration. Prior work has shown that intervening on cognitive behaviors can enhance reasoning performance, but typically employs fixed strategies or task-level selection, lacking step-level adaptation to context.

This open problem highlights the need for a principled mechanism to select the appropriate mindset in a context-dependent manner during inference. The authors’ Chain of Mindset (CoM) proposes a meta-agent for dynamic orchestration, underscoring the broader challenge of defining policies or criteria that map evolving reasoning states to mindset choices.
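To make the notion of a policy mapping reasoning states to mindset choices concrete, here is a minimal sketch of a step-level selector. It is a hypothetical heuristic, not the CoM meta-agent described in the paper: the mindset names come from the source, but the `CUES` table, the `select_mindset` function, and the keyword-matching scoring are illustrative assumptions standing in for a learned orchestration policy.

```python
from enum import Enum

class Mindset(Enum):
    SPATIAL = "spatial"
    CONVERGENT = "convergent"
    DIVERGENT = "divergent"
    ALGORITHMIC = "algorithmic"

# Hypothetical surface cues per mindset; a real meta-agent would score
# the evolving reasoning trace with a learned policy instead.
CUES = {
    Mindset.SPATIAL: ("rotate", "grid", "coordinate", "geometry", "layout"),
    Mindset.DIVERGENT: ("brainstorm", "alternative", "possibilities", "what if"),
    Mindset.ALGORITHMIC: ("step-by-step", "loop", "recursion", "procedure", "sort"),
    Mindset.CONVERGENT: ("verify", "narrow down", "conclude", "single answer"),
}

def select_mindset(task: str, trace: list[str]) -> Mindset:
    """Pick the mindset whose cues best match the task plus the most
    recent reasoning step, giving step-level (not task-level) selection."""
    context = (task + " " + (trace[-1] if trace else "")).lower()
    scores = {m: sum(cue in context for cue in cues) for m, cues in CUES.items()}
    best = max(scores, key=scores.get)
    # Fall back to convergent thinking when no cue fires.
    return best if scores[best] > 0 else Mindset.CONVERGENT
```

Because the selector re-reads the latest trace step on every call, the chosen mindset can shift mid-solution, e.g. from divergent exploration early on to convergent verification once a candidate answer appears; this is the step-level adaptation the open problem asks for, here approximated crudely by keyword matching.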

References

The work below demonstrates that intervening on cognitive behaviors can enhance reasoning, but how to adaptively select the most suitable mindset based on context remains open.

Chain of Mindset: Reasoning with Adaptive Cognitive Modes (2602.10063, Jiang et al., 10 Feb 2026), Appendix: Related Work, Subsection "Cognitive Behaviors in LLM Reasoning"