A Survey on LLM Test-Time Compute via Search: Tasks, LLM Profiling, Search Algorithms, and Relevant Frameworks (2501.10069v4)

Published 17 Jan 2025 in cs.AI

Abstract: LLM test-time compute (or LLM inference) via search (LIS) has emerged as a promising research area with rapid developments. However, current frameworks often adopt distinct perspectives on three key aspects: task definition, LLM profiling, and search procedures, making direct comparisons challenging. Moreover, the search algorithms employed often diverge from standard implementations, and their specific characteristics are not thoroughly specified. This survey aims to provide a comprehensive but integrated technical review of existing LIS frameworks. Specifically, we unify task definitions under the Markov Decision Process (MDP) formalism and provide modular definitions of LLM profiling and search procedures. These definitions enable precise comparisons of various LLM inference frameworks while highlighting their departures from conventional search algorithms. We also discuss the applicability, performance, and efficiency of these methods. For ongoing paper updates, please refer to our GitHub repository: https://github.com/xinzhel/LLM-Search.

Summary

  • The paper introduces a unified task definition based on Markov Decision Processes to compare diverse LLM-based search frameworks.
  • It modularizes LLM profiling into key components like policies, value functions, and transition models for systematic analysis.
  • The study reviews various search algorithms, emphasizing their modular integration with LLMs for enhanced inference in complex decision tasks.

A Survey on LLM Test-Time Compute via Search: Tasks, LLM Profiling, Search Algorithms, and Relevant Frameworks

The paper surveys LLM test-time computation via search, a rapidly emerging area of AI research. It addresses the challenge that existing frameworks in this domain adopt varied perspectives on task definition, LLM profiling, and search procedures, which complicates direct comparison. The work provides a unifying review with precise technical definitions to facilitate comparison and improve clarity in the evaluation of LLM inference frameworks.

Key Contributions

  1. Unified Task Definitions: The authors propose a unified task definition based on Markov Decision Processes (MDPs) to standardize the varied task formulations used across LLM-based search frameworks. The definition is extended to accommodate reasoning tasks not traditionally suited to MDP formulation, such as graph traversal and language reasoning, enabling consistent comparison across diverse applications like code generation and web navigation.
  2. LLM Profiling and Implementation: The survey modularizes LLM profiling into the components common to solving MDPs: policies, value functions, and transition models. This segmentation formalizes the roles an LLM can play, making the subsequent analysis and comparison more systematic and thorough.
  3. Modular Search Procedures: Rather than showcasing individual frameworks, the paper emphasizes the modular and reusable aspects of search procedures, enabling their effective reuse across contexts. This approach minimizes redundancy and highlights each framework's unique contribution without dwelling on overly specific configurations.
  4. Comparative Review of Frameworks: The paper provides a detailed comparative analysis of existing frameworks organized by their underlying search algorithms, including beam search, BFS, DFS, A*, and MCTS. The emphasis is on how LLMs are integrated into each search algorithm and where the implementations diverge from traditional search methodologies.
  5. Critical Analysis of LLM-Integrated Search Methods: The paper critically evaluates these methods along several dimensions, including deviation from conventional algorithms, applicability across differing contexts, performance, and efficiency. One notable focus is how frameworks handle departures from typical search settings, especially infinite or dynamic state spaces.
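The modular decomposition described in contributions 1-3 can be sketched in code. The following is a minimal, hypothetical illustration (not taken from the paper): the LLM's policy and value-function roles are stubbed with placeholder functions, the transition model is deterministic, and a beam search composes the three modules in the manner the survey describes.

```python
def llm_propose_actions(state, k):
    """Policy role: ask the LLM for k candidate next steps.
    Stubbed here; a real system would prompt an LLM."""
    return [f"{state}->a{i}" for i in range(k)]

def llm_score_state(state):
    """Value-function role: ask the LLM to rate a partial solution.
    Stubbed with a stand-in heuristic (prefer shorter traces)."""
    return -len(state)

def transition(state, action):
    """Transition model: deterministic here; in some frameworks an LLM
    simulates the environment (e.g., predicts a page after a click)."""
    return action

def beam_search(initial_state, beam_width=2, depth=3):
    """Beam search where the LLM plays both policy and value roles."""
    beam = [initial_state]
    for _ in range(depth):
        candidates = []
        for state in beam:
            for action in llm_propose_actions(state, k=beam_width):
                candidates.append(transition(state, action))
        # Keep only the top-scoring partial solutions.
        beam = sorted(candidates, key=llm_score_state, reverse=True)[:beam_width]
    return beam

result = beam_search("s0")
print(result)
```

Swapping the stubs for real LLM calls, or replacing `beam_search` with BFS, DFS, A*, or MCTS, changes only one module at a time, which is precisely the reusability the survey's modular definitions are meant to expose.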

Implications and Future Directions

The survey underscores the importance of a comprehensive framework for LLM-based inference, particularly in environments where decision-making involves complex, language-driven reasoning tasks. Practically, the work can guide research engineers and developers in implementing modular, reusable components for LLM-based applications, thereby fostering broader integration across differing AI tasks.

Theoretically, the paper advances our understanding of how LLMs can be profiled and operationalized within structured algorithmic frameworks. This opens the door to future developments that incorporate more sophisticated LLM designs, including fine-tuning LLMs for specific inference roles, a direction the authors note as outside the scope of this work.

Future research may explore areas like fine-tuning or adapting LLMs to function within these frameworks more effectively, improving their action space management and aligning inference capabilities with real-world decision-making. As AI models like LLMs advance, bridging these knowledge gaps will remain crucial in optimizing search-based methodologies for dynamic and challenging environments.

Overall, this work solidifies a foundation in search-augmented LLM inference and sets the groundwork for future endeavors looking to refine and expand the capabilities of LLMs in decision-centric AI applications.
