
Ensembling Large Language Models with Process Reward-Guided Tree Search for Better Complex Reasoning (2412.15797v1)

Published 20 Dec 2024 in cs.CL

Abstract: Despite recent advances in LLMs, open-source models often struggle to consistently perform well on complex reasoning tasks. Existing ensemble methods, whether applied at the token or output levels, fail to address these challenges. In response, we present LLM Ensemble with Monte Carlo Tree Search (LE-MCTS), a novel framework for process-level ensembling of LLMs. LE-MCTS formulates step-by-step reasoning with an ensemble of LLMs as a Markov decision process. In this framework, states represent intermediate reasoning paths, while actions consist of generating the next reasoning step using one of the LLMs selected from a predefined pool. Guided by a process-based reward model, LE-MCTS performs a tree search over the reasoning steps generated by different LLMs, identifying the most accurate reasoning chain. Experimental results on five mathematical reasoning benchmarks demonstrate that our approach outperforms both single LLM decoding algorithms and LLM ensemble methods. Notably, LE-MCTS improves performance by 3.6% and 4.3% on the MATH and MQA datasets, respectively, highlighting its effectiveness in solving complex reasoning problems.
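The abstract frames ensemble reasoning as a Markov decision process explored with Monte Carlo Tree Search: states are partial reasoning paths, each action asks one model from a pool for the next step, and a process reward model (PRM) scores partial paths. Below is a minimal, hedged sketch of that loop. The `models` and `prm` arguments are toy stand-ins (the paper uses actual LLMs and a learned PRM), and all names, the UCB constant, and the expansion strategy are illustrative assumptions, not the paper's implementation.

```python
import math
import random

class Node:
    """A search-tree node; its state is the reasoning path so far."""
    def __init__(self, steps, parent=None):
        self.steps = steps          # list of reasoning steps (strings)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0            # accumulated PRM reward

    def ucb(self, c=1.4):
        # Unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def le_mcts_sketch(question, models, prm, iterations=50, max_depth=4):
    """Toy process-level ensemble search: each expansion draws one
    candidate next step per model in the pool; the PRM scores the
    resulting partial reasoning path."""
    root = Node(steps=[])
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: one child per model in the ensemble pool.
        if len(node.steps) < max_depth:
            for model in models:
                step = model(question, node.steps)
                node.children.append(Node(node.steps + [step], parent=node))
            node = random.choice(node.children)
        # Evaluation: process reward model scores the partial path.
        reward = prm(question, node.steps)
        # Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Extract the most-visited path as the final reasoning chain.
    best = max(root.children, key=lambda n: n.visits)
    while best.children:
        best = max(best.children, key=lambda n: n.visits)
    return best.steps
```

With toy "models" that emit labeled steps and a PRM that rewards one model's steps, the search concentrates visits on the higher-reward branch, mirroring how the paper's PRM guidance is meant to steer the tree toward the most accurate chain.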

