
A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems (2504.09037v1)

Published 12 Apr 2025 in cs.AI and cs.CL

Abstract: Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of LLMs, reasoning has emerged as a key capability that distinguishes advanced AI systems from conventional models that empower chatbots. In this survey, we categorize existing methods along two orthogonal dimensions: (1) Regimes, which define the stage at which reasoning is achieved (either at inference time or through dedicated training); and (2) Architectures, which determine the components involved in the reasoning process, distinguishing between standalone LLMs and agentic compound systems that incorporate external tools and multi-agent collaborations. Within each dimension, we analyze two key perspectives: (1) Input level, which focuses on techniques that construct high-quality prompts that the LLM conditions on; and (2) Output level, which covers methods that refine multiple sampled candidates to enhance reasoning quality. This categorization provides a systematic understanding of the evolving landscape of LLM reasoning, highlighting emerging trends such as the shift from inference-scaling to learning-to-reason (e.g., DeepSeek-R1), and the transition to agentic workflows (e.g., OpenAI Deep Research, Manus Agent). Additionally, we cover a broad spectrum of learning algorithms, from supervised fine-tuning to reinforcement learning such as PPO and GRPO, and the training of reasoners and verifiers. We also examine key designs of agentic workflows, from established patterns like generator-evaluator and LLM debate to recent innovations. ...

A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems

The paper "A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems" sheds light on the evolving landscape of reasoning capabilities within LLMs. The authors take a comprehensive approach, categorizing existing methods along two orthogonal dimensions: regimes and architectures. The paper thoroughly examines the stage at which reasoning is achieved (at inference time or through dedicated training) and the components involved in the reasoning process, spanning standalone LLMs and agentic systems.

Core Components and Findings

Reasoning Regimes

  1. Inference Scaling: The paper reviews methods that enhance reasoning at inference time without modifying model parameters. Chain-of-Thought (CoT) prompting, prompt optimization, and search strategies are highlighted as effective ways to scale test-time computation. Notably, OpenAI's results on inference scaling suggest that reasoning can improve substantially without growing model size.
  2. Learning to Reason: This paradigm shifts reasoning improvements to the training phase, where models are explicitly trained to reason before deployment. The paper discusses a spectrum of learning algorithms, from supervised fine-tuning to reinforcement learning methods such as PPO and GRPO. DeepSeek-R1 serves as a notable example, achieving strong reasoning performance through reinforcement learning at comparatively modest computational cost.
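Of the learning-to-reason algorithms above, GRPO is the simplest to sketch: it replaces PPO's learned value function with reward normalization within a group of completions sampled for the same prompt. The snippet below is a simplified illustration (the function name and example rewards are ours; real training applies these advantages inside a token-level policy-gradient update with clipping):

```python
# Sketch of GRPO-style group-relative advantages (illustrative only).
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """Normalize each sampled completion's reward against its group:
    A_i = (r_i - mean(r)) / std(r). Completions scoring above the group
    mean get positive advantage and are reinforced."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against uniform-reward groups
    return [(r - mu) / sigma for r in rewards]

# Example: 4 completions for one prompt, scored by a rule-based verifier.
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))  # [1.0, -1.0, -1.0, 1.0]
```

The group mean acts as the baseline, so no separate critic network is needed, which is one reason the approach is comparatively cheap.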

System Architecture

  1. Standalone LLMs vs. Agentic Systems: The paper contrasts standalone LLM reasoning with agentic systems that incorporate tools and multi-agent collaborations. Agentic systems are characterized by interactivity and autonomy, improving reasoning through feedback from external environments.
  2. Single-Agent and Multi-Agent Systems: Within single-agent frameworks, strategies include integrating external tools and adapting dynamically to task-specific requirements. In multi-agent setups, coordinated communication and debate patterns support more robust problem-solving.
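Among the agentic patterns the survey covers, generator-evaluator is the most established. A minimal sketch of that loop, with `generate` and `evaluate` as hypothetical stand-ins for LLM calls (not any specific API): one agent drafts, another critiques, and the draft is revised until the critic accepts or a round budget runs out.

```python
# Minimal generator-evaluator loop (toy sketch, not the survey's code).
def generator_evaluator(task, generate, evaluate, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = generate(task, feedback)       # generator conditions on critique
        ok, feedback = evaluate(task, draft)   # evaluator accepts or critiques
        if ok:
            return draft
    return draft  # best effort once the budget is exhausted

# Toy stand-ins: the "generator" fixes its draft after seeing feedback.
drafts = iter(["2 + 2 = 5", "2 + 2 = 4"])
generate = lambda task, fb: next(drafts)
evaluate = lambda task, d: (d.endswith("4"), "check the arithmetic")
print(generator_evaluator("2 + 2 = ?", generate, evaluate))  # 2 + 2 = 4
```

LLM-debate variants follow the same skeleton but run several generators in parallel and let each see the others' drafts before revising.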

Practical and Theoretical Implications

The survey anticipates a trajectory in which reasoning becomes integral to achieving Artificial General Intelligence, advancing beyond conventional AI systems. The implications are significant, with models potentially engaging autonomously in complex tasks that require logical inference and decision-making.

Moreover, the nuanced exploration of both theoretical and empirical analyses of reasoning reveals substantial challenges and opportunities. For instance, evaluating reasoning beyond outcome correctness remains a pivotal challenge, and innovations in evaluation metrics are called for to gauge the reasoning process effectively.
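To make the outcome-versus-process distinction concrete, consider this toy sketch. The step verifier below is a hypothetical stand-in for a trained process reward model (the survey does not prescribe a specific one); here a simple arithmetic check plays that role.

```python
# Outcome-only scoring vs. process-level scoring (toy illustration).
def outcome_score(answer, gold):
    """Score only the final answer: 1.0 if it matches, else 0.0."""
    return float(answer == gold)

def process_score(steps, step_verifier):
    """Average per-step correctness: credits sound intermediate reasoning
    even when the final answer slips, and penalizes lucky guesses built
    on flawed chains."""
    scores = [step_verifier(s) for s in steps]
    return sum(scores) / len(scores)

# Toy verifier: turn "a op b = c" into a boolean arithmetic check.
check_step = lambda s: float(eval(s.replace("=", "==")))

chain = ["4 * 3 = 12", "12 + 1 = 14"]  # second step is wrong
print(outcome_score("14", "13"))       # 0.0
print(process_score(chain, check_step))  # 0.5
```

An outcome metric alone would rate this chain identically to one that was wrong at every step, which is precisely the gap the survey's call for process-level evaluation metrics addresses.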

Future Directions

The paper suggests promising developments in adaptive training strategies that dynamically distribute computational costs between training and inference. Additionally, the refinement of communication protocols within agentic systems is poised to enhance reasoning performance significantly.

The authors also spotlight emerging trends toward domain-specific reasoning systems, signaling a shift where models might specialize in areas like mathematical reasoning, code generation, or strategic reasoning within multi-agent games.

In conclusion, this survey offers AI researchers and practitioners a robust foundation for pursuing further research into reasoning in LLMs, integrating theoretical insights with empirical advancements. It effectively outlines crucial pathways to developing sophisticated and reliable AI systems equipped with profound reasoning capabilities.

Authors (12)
  1. Zixuan Ke (26 papers)
  2. Fangkai Jiao (19 papers)
  3. Yifei Ming (26 papers)
  4. Xuan-Phi Nguyen (22 papers)
  5. Austin Xu (12 papers)
  6. Do Xuan Long (7 papers)
  7. Minzhi Li (8 papers)
  8. Chengwei Qin (28 papers)
  9. Peifeng Wang (14 papers)
  10. Silvio Savarese (200 papers)
  11. Caiming Xiong (337 papers)
  12. Shafiq Joty (187 papers)