MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time (2405.16265v4)

Published 25 May 2024 in cs.LG

Abstract: Although LLMs achieve remarkable performance across various tasks, they often struggle with complex reasoning tasks, such as answering mathematical questions. Recent efforts to address this issue have primarily focused on leveraging mathematical datasets through supervised fine-tuning or self-improvement techniques. However, these methods often depend on high-quality datasets that are difficult to prepare, or they require substantial computational resources for fine-tuning. Inspired by the finding that LLMs know how to produce the right answer but struggle to select the correct reasoning path, we propose a purely inference-based search method, MindStar (M*). This method formulates reasoning tasks as search problems and proposes two search ideas to identify optimal reasoning paths. We evaluate the M* framework on both the GSM8K and MATH datasets, comparing its performance with existing open- and closed-source LLMs. Our results demonstrate that M* significantly enhances the reasoning abilities of open-source models such as Llama-2-13B and Mistral-7B, and achieves performance comparable to GPT-3.5 and Grok-1 with substantially reduced model size and computational cost.
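
The abstract frames step-by-step reasoning as a search problem over candidate reasoning paths, scored at inference time rather than improved through fine-tuning. The sketch below illustrates that general idea only: a best-first search in which a stubbed LLM proposes candidate next steps and a stubbed reward model scores partial paths. All names (`generate_steps`, `score_path`, `best_first_search`) are hypothetical stand-ins, not the paper's API, and the paper's two actual search variants may differ in detail.

```python
import heapq

# Hypothetical stubs: in a real system, generate_steps would sample candidate
# next reasoning steps from an LLM, and score_path would rate a partial path
# with a reward model. These toy versions only make the sketch executable.

def generate_steps(question: str, path: tuple, k: int = 3) -> list[str]:
    depth = len(path)
    prefix = "answer:" if depth >= 2 else "step"
    return [f"{prefix} {depth}-{i}" for i in range(k)]

def score_path(question: str, path: tuple) -> float:
    return -0.1 * len(path)  # toy score; a learned reward model would go here

def is_final(step: str) -> bool:
    # Terminal check: treat a step that states a final answer as a leaf.
    return step.startswith("answer:")

def best_first_search(question: str, max_expansions: int = 64, k: int = 3):
    # Frontier is a max-heap over path scores (negated for Python's min-heap);
    # ties break lexicographically on the path tuple, which is fine for a demo.
    frontier = [(-score_path(question, ()), ())]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, path = heapq.heappop(frontier)
        if path and is_final(path[-1]):
            return list(path)  # best-scoring complete reasoning path
        for step in generate_steps(question, path, k=k):
            new_path = path + (step,)
            heapq.heappush(frontier, (-score_path(question, new_path), new_path))
    return None  # search budget exhausted before a complete path was found

print(best_first_search("If x + 2 = 5, what is x?"))
```

The appeal of this family of methods, as the abstract argues, is that the extra work happens entirely at inference time: swapping in a stronger step generator or path scorer requires no gradient updates to the base model.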

Authors (12)
  1. Jikun Kang (7 papers)
  2. Xin Zhe Li (1 paper)
  3. Xi Chen (1035 papers)
  4. Amirreza Kazemi (4 papers)
  5. Boxing Chen (67 papers)
  6. Dong Li (429 papers)
  7. Feng Wen (19 papers)
  8. Jianye Hao (185 papers)
  9. Qianyi Sun (3 papers)
  10. Xu He (66 papers)
  11. Quan He (4 papers)
  12. Jun Yao (36 papers)
Citations (9)