Mechanism behind context-induced reasoning shift in LLMs
Investigate the mechanism by which different prompt context conditions cause large language models operating in thinking mode to generate shorter Chain-of-Thought traces and to show less self-verification and uncertainty management than when they solve the same problems in isolation (Baseline setup). The context conditions of interest are long irrelevant prefixes (Long input setup), multiple independent problems packed into a single prompt (Subtask setup), and multi-turn chat histories (Multi-turn setup).
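One concrete way to frame the investigation is to hold the target problem fixed, wrap it in each of the four context conditions, and compare the resulting reasoning traces. The sketch below assumes a hypothetical `generate_with_reasoning` callable that returns a model's chain-of-thought trace for a prompt, and uses keyword counts as crude proxies for self-verification and uncertainty-management behaviors; both the callable and the marker lists are illustrative assumptions, not the paper's protocol.

```python
# Sketch: probe how prompt context changes a thinking-mode model's reasoning.
# `generate_with_reasoning` is a hypothetical stand-in for whatever API call
# returns the chain-of-thought trace; the marker lists are rough proxies only.
import re
from typing import Callable, Dict, List, Tuple

VERIFICATION_MARKERS = ["let me check", "let me verify", "double-check", "wait,"]
UNCERTAINTY_MARKERS = ["not sure", "i think", "maybe", "perhaps"]


def count_markers(trace: str, markers: List[str]) -> int:
    """Crude keyword count used as a proxy for a reasoning behavior."""
    lowered = trace.lower()
    return sum(lowered.count(m) for m in markers)


def build_conditions(problem: str,
                     irrelevant_prefix: str,
                     other_problems: List[str],
                     chat_history: List[Tuple[str, str]]) -> Dict[str, str]:
    """Embed the same target problem in the four context conditions."""
    return {
        "Baseline": problem,                                  # problem in isolation
        "Long input": irrelevant_prefix + "\n\n" + problem,   # long irrelevant prefix
        "Subtask": "\n\n".join(other_problems + [problem]),   # several problems, one prompt
        "Multi-turn": "\n".join(f"{role}: {msg}" for role, msg in chat_history)
                      + f"\nuser: {problem}",                 # prior chat turns
    }


def measure_shift(conditions: Dict[str, str],
                  generate_with_reasoning: Callable[[str], str]) -> Dict[str, Dict[str, int]]:
    """Compare reasoning-trace length and behavior counts across conditions."""
    results = {}
    for name, prompt in conditions.items():
        trace = generate_with_reasoning(prompt)              # reasoning trace only
        results[name] = {
            "trace_tokens": len(re.findall(r"\S+", trace)),  # whitespace token count
            "verification": count_markers(trace, VERIFICATION_MARKERS),
            "uncertainty": count_markers(trace, UNCERTAINTY_MARKERS),
        }
    return results
```

Keyword counting is only a rough behavioral proxy; a fuller study would replace it with annotated or model-judged labels, but the fixed-problem, varied-context comparison is the essential design.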
References
We leave a deeper analysis of the mechanism behind this shift for future work.
— Reasoning Shift: How Context Silently Shortens LLM Reasoning
(2604.01161 - Rodionov, 1 Apr 2026) in Section 4 (Analysis), end of section