
Scalable Chain of Thoughts via Elastic Reasoning (2505.05315v2)

Published 8 May 2025 in cs.LG, cs.AI, and cs.CL

Abstract: Large reasoning models (LRMs) have achieved remarkable progress on complex tasks by generating extended chains of thought (CoT). However, their uncontrolled output lengths pose significant challenges for real-world deployment, where inference-time budgets on tokens, latency, or compute are strictly constrained. We propose Elastic Reasoning, a novel framework for scalable chain of thoughts that explicitly separates reasoning into two phases--thinking and solution--with independently allocated budgets. At test time, Elastic Reasoning prioritizes the completeness of solution segments, significantly improving reliability under tight resource constraints. To train models that are robust to truncated thinking, we introduce a lightweight budget-constrained rollout strategy, integrated into GRPO, which teaches the model to reason adaptively when the thinking process is cut short and generalizes effectively to unseen budget constraints without additional training. Empirical results on mathematical (AIME, MATH500) and programming (LiveCodeBench, Codeforces) benchmarks demonstrate that Elastic Reasoning performs robustly under strict budget constraints, while incurring significantly lower training cost than baseline methods. Remarkably, our approach also produces more concise and efficient reasoning even in unconstrained settings. Our code has been made available at https://github.com/SalesforceAIResearch/Elastic-Reasoning.

Summary

Scalable Chain of Thoughts via Elastic Reasoning: An Analytical Overview

The paper "Scalable Chain of Thoughts via Elastic Reasoning" addresses the challenge of deploying Large Reasoning Models (LRMs) in environments where inference-time budgets on tokens, latency, or compute are strictly constrained. It retains the benefits of extended chains of thought (CoT) on complex tasks, such as mathematical and programming problem-solving, while bringing output length under explicit control.

Key Contributions

The authors introduce Elastic Reasoning, a scalable framework that separates the reasoning process into two distinct phases, thinking and solution, each with an independently allocated budget. At test time, the solution segment's budget is guaranteed, so output completeness is preserved even when the thinking phase is cut short. To make models robust to such truncation, the paper introduces a budget-constrained rollout strategy integrated into GRPO, which trains LRMs to reason adaptively under truncated thinking and generalizes to unseen budget constraints without additional training.
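The two-phase inference scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`elastic_generate`, `toy_generator`) and the `END_THINK` marker are hypothetical stand-ins for an actual LRM sampling loop.

```python
END_THINK = "</think>"  # illustrative delimiter separating thinking from solution


def elastic_generate(generate_tokens, thinking_budget, solution_budget):
    """Run inference with independent budgets for the thinking and solution phases.

    `generate_tokens(phase, max_tokens)` is a stand-in for sampling from a model.
    If the model has not closed its thinking segment when the thinking budget is
    exhausted, the end-of-thinking marker is forced so that the solution phase
    always receives its full, separately allocated budget.
    """
    thinking = generate_tokens("think", thinking_budget)
    if END_THINK not in thinking:
        # Truncate thinking and force the transition to the solution phase.
        thinking = thinking[:thinking_budget] + [END_THINK]
    solution = generate_tokens("solve", solution_budget)
    return thinking, solution


# Toy stand-in for a model: emits a fixed stream of placeholder tokens.
def toy_generator(phase, max_tokens):
    return [f"{phase}_{i}" for i in range(max_tokens)]


thinking, solution = elastic_generate(toy_generator, thinking_budget=8, solution_budget=4)
print(len(solution))  # the solution phase always gets its full budget
```

The key design point is that truncating only the thinking segment, rather than the whole output, keeps the final answer intact under tight budgets.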

Results and Findings

Empirical evaluations demonstrate that Elastic Reasoning delivers notable performance improvements under constrained budgets across benchmarks such as AIME, MATH500, LiveCodeBench, and Codeforces. Some strong numerical results include:

  • The E1-Math-1.5B model achieves 35.0% accuracy on AIME2024, outperforming L1-Max (27.1%) and L1-Exact (24.2%).
  • The E1-Code-14B model achieves a Codeforces rating of 1987, placing in the 96.0th percentile, comparable to O1-2024-12-17 (Low) with a rating of 1991.

Moreover, the models exhibit significant reductions in token usage—over 30% on various datasets—while maintaining or even enhancing performance. These results underline the efficacy of Elastic Reasoning in achieving concise and efficient reasoning paths.

Theoretical and Practical Implications

The framework proposed offers practical advantages for deploying LRMs in real-world applications constrained by resource availability, facilitating more reliable outputs without compromising solution quality. Theoretically, it emphasizes the importance of structuring the inference process into distinct phases to better manage computational budgets. This insight could inspire future research into optimizing reasoning models further by exploring phase-oriented strategies for reasoning adaptation.

Speculations on Future Developments

The successful implementation of Elastic Reasoning opens avenues for application in broader AI domains. Potential extensions include dynamic budget allocation in response to live conditions or real-time feedback, and integration with other adaptive strategies such as test-time optimization, which could yield further efficiency gains for LRMs.

Conclusion

Overall, the paper introduces pivotal advancements in managing inference-time budgets for LRMs, retaining a high degree of solution reliability and robustness. Elastic Reasoning stands out as a noteworthy development in the landscape of reasoning model optimization, offering scalable and flexible solutions for modern AI challenges.
