AdapThink: Adaptive Thinking Preferences for Reasoning Language Model (2506.18237v1)

Published 23 Jun 2025 in cs.LG, cs.AI, and cs.CL

Abstract: Reinforcement Learning (RL)-based post-training has significantly advanced the complex reasoning capabilities of LLMs, fostering sophisticated self-reflection processes. However, this "slow thinking" paradigm presents a critical challenge to reasoning efficiency: models may expend excessive computation on simple questions and shift reasoning prematurely for complex ones. Previous mechanisms typically rely on static length budgets or predefined rules, lacking the adaptability for varying question complexities and models' evolving capabilities. To this end, we propose AdapThink, an adaptive post-training framework designed to induce more efficient thinking while maintaining the performance of reasoning LLMs. Specifically, AdapThink incorporates two key mechanisms: 1) A group-relative reward function that leverages model confidence and response's characteristic to dynamically adjust the preference of reflection-related transition words without resorting to a fixed length preference. 2) A diversity-aware sampling mechanism that balances the training group's solution accuracy with reasoning diversity via an entropy-guided score. Experiments on several mathematical reasoning datasets with DeepSeek-distilled models demonstrate AdapThink's advantages in enabling adaptive reasoning patterns and mitigating the inefficiencies.

Summary

  • The paper introduces an adaptive RL framework that modulates reasoning depth based on group-level confidence signals to enhance efficiency and accuracy.
  • It employs a two-stage sampling process that generates diverse reasoning paths while balancing redundant reflection with necessary exploration.
  • Empirical results demonstrate a 27% improvement in PASS@1 and a 15.6% token reduction, confirming the method's efficiency and robustness against reward hacking.

AdapThink: Adaptive Thinking Preferences for Efficient Reasoning in LLMs

AdapThink introduces a post-training reinforcement learning (RL) framework that directly addresses the inefficiencies inherent in the "slow thinking" paradigm employed by contemporary reasoning LLMs. Under existing RL-based post-training procedures, models frequently overthink simple queries, expending superfluous computation on excessive self-reflection, while underthinking complex ones, terminating their reasoning prematurely and losing accuracy. AdapThink aims for adaptive, capability-aware reasoning by moving beyond static, rule-based control of chain-of-thought (CoT) length, providing reasoning that is both efficient and accurate.

Methodology Overview

The core contributions of AdapThink are twofold:

  1. Group-Relative Reward for Adaptive Reasoning Control: AdapThink introduces a reward function that dynamically modulates the preference for reflection and reasoning depth. The reward leverages group-level response statistics, in particular the distribution of reflection-related transition words (e.g., "wait", "check", "alternatively") across a group of sampled answers. Rather than penalizing or rewarding a static length target, AdapThink computes model confidence for each question over the group and uses this signal to decide whether to prioritize further exploration (when confidence is low) or to enforce conciseness and minimize redundant reflection (when confidence is high). The reward formulation balances token consumption, the presence of completion markers, and the frequency of branch-extension words, with adaptive weighting determined by intra-group accuracy (see the sketch following this list).
  2. Diversity-Aware Sampling: The framework emphasizes the need for diverse reasoning patterns within each group used for policy optimization, implemented as a two-stage sampling process (a sketch of the downsampling step appears below):
     - Oversampling: generate a large candidate pool, allowing for variation in chain-of-thought execution.
     - Entropy-based Downsampling: select a subset that maximizes diversity across dimensions such as output length, "Pause-Validation" word usage, and "Branch-Extension" transitions, while maintaining a prescribed number of correct and incorrect samples according to group confidence statistics.
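The paper's exact reward formulation is not reproduced in this summary; the Python sketch below only illustrates the group-relative idea. The function names, marker lists, completion marker, and weighting constants are assumptions for exposition, not the authors' implementation.

```python
import re

# Illustrative marker lists; the paper groups transition words into
# "Pause-Validation" and "Branch-Extension" categories.
PAUSE_VALIDATION = ["wait", "check", "verify", "double-check"]
BRANCH_EXTENSION = ["alternatively", "another approach", "on the other hand"]
COMPLETION_MARKER = r"\boxed{"  # stand-in for a completion marker

def count_markers(text, markers):
    """Count case-insensitive occurrences of marker phrases in a response."""
    lowered = text.lower()
    return sum(len(re.findall(re.escape(m), lowered)) for m in markers)

def group_relative_reward(responses, correct_flags):
    """Group-relative reward sketch: intra-group accuracy serves as a
    confidence signal that decides whether to penalize redundant reflection
    and length (high confidence) or to reward exploratory branch-extension
    behavior (low confidence)."""
    confidence = sum(correct_flags) / len(correct_flags)
    rewards = []
    for response, correct in zip(responses, correct_flags):
        reward = 1.0 if correct else 0.0
        length = len(response.split())
        pauses = count_markers(response, PAUSE_VALIDATION)
        branches = count_markers(response, BRANCH_EXTENSION)
        completed = COMPLETION_MARKER in response
        if confidence >= 0.5:
            # Confident group: prefer concise, completed answers with
            # little redundant pause-validation.
            reward -= confidence * (1e-3 * length + 0.05 * pauses)
            reward += 0.1 if completed else -0.1
        else:
            # Uncertain group: encourage continued exploration via
            # branch-extension transitions.
            reward += (1.0 - confidence) * 0.05 * branches
        rewards.append(reward)
    return rewards
```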

This dual approach allows AdapThink to adaptively promote or discourage reflection and branching based on actual model performance and question difficulty, learning distinct patterns for handling simple versus complex reasoning instances.
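As a rough companion to the reward sketch above, the entropy-guided downsampling step might look like the following. The choice of features, the greedy selection strategy, and all names are assumptions for illustration rather than the paper's algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of one discrete feature across a group of samples."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def diversity_score(group):
    """Mean entropy over discretized per-sample features: a length bucket,
    a pause-validation count, and a branch-extension count."""
    features = ["len_bucket", "pause", "branch"]
    return sum(entropy([s[f] for s in group]) for f in features) / len(features)

def entropy_guided_downsample(candidates, group_size, num_correct):
    """Keep num_correct correct samples (set from group confidence), then
    greedily add candidates that maximize the group's diversity score."""
    selected = [c for c in candidates if c["correct"]][:num_correct]
    pool = [c for c in candidates if c not in selected]
    while len(selected) < group_size and pool:
        best = max(pool, key=lambda c: diversity_score(selected + [c]))
        selected.append(best)
        pool.remove(best)
    return selected
```

Under this reading, oversampling by a factor of 2 (the setting highlighted in the ablations) would mean generating 2 × group_size candidates before calling the downsampling step.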

Empirical Evaluation

Experiments are conducted on mathematical reasoning benchmarks—including AIME, AMC, and MATH-500—using DeepSeek-R1-Distill-Qwen-1.5B as the base model. The evaluation protocol covers answer accuracy (PASS@1), reasoning efficiency (token count and reflection word statistics), and group diversity (entropy-based metrics).
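For context, PASS@1 over n sampled answers per question is commonly computed with the standard unbiased pass@k estimator, whose k = 1 case reduces to the fraction of correct samples. The snippet below shows that estimator; the paper's exact evaluation protocol may differ in detail.

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimate: 1 - C(n - c, k) / C(n, k),
    where n samples were drawn and c of them are correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this is simply the fraction of correct samples:
print(pass_at_k(16, 6, 1))  # 6/16 = 0.375
```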

Key findings include:

  • Improved Accuracy and Efficiency: AdapThink achieves a 27% average improvement in PASS@1 and a 15.6% reduction in average token usage compared to the base model, outperforming various length-control and group-reward baselines.
  • Effective Reflection Word Modulation: Under AdapThink, unnecessary "Pause-Validation" and "Branch-Extension" word usage is systematically suppressed in correct answers, particularly for easy problems, reducing redundant reasoning steps without sacrificing accuracy. For complex problems, reflection is adaptively retained.
  • Robustness to Reward Hacking: N-gram repetition analysis reveals low levels of degenerative repetition in AdapThink-trained responses, indicating resistance to simple reward hacking by excessive repetition of particular wording patterns—a known failure mode for some length penalty-based RL algorithms.
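The summary does not specify the exact repetition metric; a plain n-gram repetition rate, shown below as a reasonable stand-in, is one common way to quantify degenerative repetition.

```python
from collections import Counter

def ngram_repetition_rate(tokens, n=4):
    """Fraction of n-grams in a response that repeat an earlier n-gram;
    higher values indicate degenerate, looping generations."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(ngrams)

# Example: a response that loops on the same phrase scores high.
text = "wait let me check again wait let me check again wait let me check again"
print(round(ngram_repetition_rate(text.split(), n=4), 2))
```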

Ablations and Analysis

The ablation study highlights the importance of each AdapThink component:

  • Removing length or branch-extension control terms from the reward substantially degrades performance and leads to inefficient reflection word usage and longer outputs.
  • Diversity-aware sampling significantly enhances group entropy metrics and overall sample efficiency, with oversampling by a factor of 2 yielding optimal results.
  • A curriculum learning strategy where the context length is gradually extended during post-training (e.g., 2k → 4k tokens) leads to superior accuracy and efficiency compared to direct long-context training.
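The context-length curriculum from the last ablation point could be expressed as a simple staged schedule; the step counts, class, and field names below are illustrative placeholders, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class CurriculumStage:
    steps: int        # number of RL post-training steps at this stage
    max_context: int  # maximum generation/context length in tokens

# Illustrative 2k -> 4k schedule mirroring the ablation's curriculum;
# the step counts are placeholders.
SCHEDULE = [
    CurriculumStage(steps=500, max_context=2048),
    CurriculumStage(steps=500, max_context=4096),
]

def max_context_at(step):
    """Return the context limit in effect at a given global training step."""
    boundary = 0
    for stage in SCHEDULE:
        boundary += stage.steps
        if step < boundary:
            return stage.max_context
    return SCHEDULE[-1].max_context
```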

Analysis of the impact of controlling different transition word categories (reflection vs. branching vs. both) shows that regulating both yields the best overall learning efficiency. Over-regularization, such as indiscriminate penalization of all reflection, degrades accuracy, underlining the necessity for adaptive, context-dependent reward assignments.

Practical Implications and Theoretical Insights

AdapThink demonstrates a scalable and practical strategy for post-training reasoning models where inference cost and answer latency are critical, such as embedded mathematical engines or constrained deployment environments. By modulating reasoning depth and reflection adaptively—rather than via static budgets—it aligns inference workload with actual question complexity and model uncertainty.

The framework’s reliance on group-relative statistics and sample diversity also offers robustness against classic RL instability issues (mode collapse, reward hacking) encountered in previous RL-based LLM post-training regimes.

On a theoretical level, AdapThink provides evidence that explicit control over semantic reasoning primitives (reflection/branching markers) is a more tractable and effective lever for reasoning efficiency than reliance on raw length or token-level heuristics. This moves LLM post-training closer to curriculum-aware, introspection-based optimization strategies.

Future Prospects

The methodology’s reliance on surface-level linguistic markers (reflection and branching words) limits its ability to capture all semantically relevant reasoning transitions, pointing toward future work on semantic-level control and deeper integration of model-internal uncertainty and thought-progress tracking. Additionally, extending the adaptive reward mechanism to multimodal and broader reasoning domains remains an open direction.

AdapThink establishes a strong foundation for the next generation of efficient, capability-adaptive LLM reasoning systems, bridging the gap between maximal accuracy and practical inference budgets while highlighting the efficacy of dynamic self-reflection modulation over static token control.
