
When to Continue Thinking: Adaptive Thinking Mode Switching for Efficient Reasoning (2505.15400v2)

Published 21 May 2025 in cs.AI and cs.CL

Abstract: Large reasoning models (LRMs) achieve remarkable performance via long reasoning chains, but often incur excessive computational overhead due to redundant reasoning, especially on simple tasks. In this work, we systematically quantify the upper bounds of LRMs under both Long-Thinking and No-Thinking modes, and uncover the phenomenon of "Internal Self-Recovery Mechanism" where models implicitly supplement reasoning during answer generation. Building on this insight, we propose Adaptive Self-Recovery Reasoning (ASRR), a framework that suppresses unnecessary reasoning and enables implicit recovery. By introducing accuracy-aware length reward regulation, ASRR adaptively allocates reasoning effort according to problem difficulty, achieving high efficiency with negligible performance sacrifice. Experiments across multiple benchmarks and models show that, compared with GRPO, ASRR reduces reasoning budget by up to 32.5% (1.5B) and 25.7% (7B) with minimal accuracy loss (1.2% and 0.6% pass@1), and significantly boosts harmless rates on safety benchmarks (up to +21.7%). Our results highlight the potential of ASRR for enabling efficient, adaptive, and safer reasoning in LRMs.

Summary

  • The paper introduces Adaptive Self-Recovery Reasoning (ASRR), a framework to optimize efficiency in Large Reasoning Models by dynamically adjusting reasoning length based on task complexity.
  • The study identifies an intrinsic "Internal Self-Recovery Mechanism" where models spontaneously continue reasoning, which ASRR leverages with an accuracy-aware length reward regulation.
  • ASRR substantially reduces the reasoning budget (by up to 32.5%) while maintaining accuracy, and it raises harmless rates on safety benchmarks, supporting efficient and potentially safer deployment.

Adaptive Thinking Mode Switching for Efficient Reasoning in Large Reasoning Models

The paper "When to Continue Thinking: Adaptive Thinking Mode Switching for Efficient Reasoning" explores the optimization of reasoning processes in Large Reasoning Models (LRMs). Despite their advanced capabilities, these models are subject to inefficiencies due to unnecessary extended reasoning, which occur notably during simple tasks. The work systematically analyzes reasoning behaviors, identifies key phenomena, and introduces a new framework, Adaptive Self-Recovery Reasoning (ASRR), to optimize reasoning length.

The paper begins with a critical examination of LRMs under two distinct reasoning modes: Long-Thinking, which engages in comprehensive reasoning chains, and No-Thinking, which suppresses explicit reasoning. This analysis reveals an intrinsic "Internal Self-Recovery Mechanism": even under the No-Thinking constraint, models spontaneously resume reasoning during answer generation. Leveraging this insight, ASRR dynamically manages reasoning length based on task complexity through an accuracy-aware length reward regulation mechanism.
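To make the two modes concrete, below is a minimal sketch of how they might be elicited and how self-recovery could be detected, assuming a DeepSeek-R1-style chat template in which reasoning is wrapped in `<think>...</think>` tags. The prompt format, marker list, and detection heuristic are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: eliciting Long-Thinking vs. No-Thinking and flagging
# "internal self-recovery", assuming a DeepSeek-R1-style template
# where reasoning is wrapped in <think>...</think>.

# Hypothetical cue phrases suggesting the model re-opened reasoning
# inside its answer despite the suppressed think block.
RECOVERY_MARKERS = ("wait", "let me verify", "on second thought", "actually")

def build_prompt(question: str, no_thinking: bool) -> str:
    """Long-Thinking leaves the think block open; No-Thinking pre-closes it."""
    prompt = f"<|user|>{question}<|assistant|><think>"
    if no_thinking:
        # Pre-filling an empty, closed think block suppresses explicit reasoning.
        prompt += "\n</think>\n"
    return prompt

def shows_self_recovery(answer: str) -> bool:
    """Heuristic: the model resumes step-by-step reasoning in the answer body."""
    lowered = answer.lower()
    return any(marker in lowered for marker in RECOVERY_MARKERS)
```

Under this setup, comparing accuracy between the two prompt variants gives the mode-wise upper bounds the paper quantifies, and the detector flags responses where the model implicitly supplements reasoning it was told to skip.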

Quantitative results evaluate ASRR's impact across multiple benchmarks and models. Compared with GRPO, ASRR reduces the reasoning budget by up to 32.5% for the 1.5B model and 25.7% for the 7B model while maintaining accuracy at minimal cost (pass@1 drops of only 1.2% and 0.6%, respectively). Moreover, ASRR raises harmless rates on safety benchmarks by up to 21.7 percentage points, suggesting potential for safer deployment in real-world applications.

Several observations are critical to understanding this paper:

  • Internal Self-Recovery Mechanism: models occasionally self-supplement reasoning steps even in No-Thinking mode, indicating a latent capacity for difficulty perception and dynamic budget allocation.
  • Adaptive Self-Recovery Reasoning (ASRR): by suppressing redundant reasoning on simple tasks and permitting implicit recovery where necessary, ASRR adapts to varying task complexity. An adaptive length penalty conditioned on performance ensures efficiency without sacrificing accuracy (see the sketch after this list).
  • Implications: ASRR makes computation in LRMs more efficient while promoting adaptive reasoning and safer outputs, which matters as these models are increasingly deployed in sensitive domains.
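
As a concrete illustration of accuracy-aware length reward regulation, the sketch below gates a length penalty on group accuracy, so that easy problems (where most samples are already correct) are pushed toward shorter outputs while hard ones keep their full reasoning budget. The threshold, penalty scale, and normalization are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def asrr_reward(correct: np.ndarray, lengths: np.ndarray,
                acc_threshold: float = 0.75, lam: float = 0.1) -> np.ndarray:
    """Accuracy-gated length reward for one group of sampled responses.

    `correct` holds 0/1 correctness flags and `lengths` the token counts
    for a group of samples drawn for the same problem. `acc_threshold`
    and `lam` are hypothetical hyperparameters.
    """
    base = correct.astype(float)                # 1 for correct, 0 otherwise
    group_acc = base.mean()
    if group_acc < acc_threshold:
        return base                             # too hard: no length pressure
    norm_len = lengths / max(lengths.max(), 1)  # normalize within the group
    # Penalize only correct responses, so brevity never outranks correctness.
    return base - lam * norm_len * base
```

In a GRPO-style training loop, a reward of this shape would replace the plain correctness reward for each sampled group; because the penalty activates only above the accuracy threshold and applies only to correct responses, the model is nudged toward shorter reasoning exactly where it can afford it.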

While the findings advance efficient reasoning, the research also opens pathways for further work on automated threshold tuning and for broader application across diverse LLM architectures. Such adaptability and efficiency could help scalable AI systems manage computational load in resource-constrained environments.

In conclusion, the ASRR framework holds potential not only for computational efficiency but also for safer alignment of reasoning models, meeting the growing demand for practical and reliable AI in dynamic, demanding tasks. Future work could integrate ASRR's principles into larger-scale models and a wider range of applications, improving both reasoning efficiency and robustness.