Emergent Mind

More Agents Is All You Need

Published Feb 3, 2024 in cs.CL, cs.AI, and cs.LG


We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence. Our code is publicly available at: https://anonymous.4open.science/r/more_agent_is_all_you_need.


  • The paper introduces a novel approach to enhance LLM performance by leveraging a simple sampling-and-voting mechanism across multiple LLM agents.

  • It demonstrates that increasing the number of instantiated agents significantly improves LLMs' performance, especially in more complex tasks.

  • The proposed methodology involves generating multiple outputs by repeatedly feeding a task query into multiple LLM agents, followed by a majority vote to select the final outcome.

  • Empirical evidence shows that smaller models, when enhanced and combined in this way, can outperform larger models, challenging the focus on model size for performance improvement.

Introduction to Sampling-and-Voting in LLMs

The landscape of LLMs has been primarily shaped by innovations aimed at enhancing their performance and applicability across a broad spectrum of tasks. Despite the remarkable capabilities showcased by these models in language generation, understanding, and reasoning, their performance tends to falter when faced with more intricate tasks. Recent studies have underscored the utility of ensemble methods and frameworks that leverage multiple LLM agents to surmount these challenges. These approaches have shown promising results, enhancing the models' reasoning abilities and output accuracy.

A Novel Perspective on Scaling LLM Agents

A study titled "More Agents Is All You Need" presents a straightforward yet effective strategy to boost LLM performance. By implementing a simple sampling-and-voting mechanism, the researchers demonstrate that the performance of LLMs scales with the number of agents instantiated. This finding is significant because it extends beyond the scope of existing methods, offering a complementary avenue to amplify LLM performance. Intriguingly, the study reveals that the enhancement is clearly correlated with task difficulty, suggesting that more complex problems stand to benefit more from this approach.

Methodological Insights

The study offers a comprehensive exploration of the scaling property of LLM agents, proposing a two-phase method: sampling and voting. The process generates multiple outputs by iteratively feeding a task query into either a single LLM or a collaboration framework of multiple LLM agents, then applies majority voting to select the final outcome. The simplicity and efficiency of this method are underscored by its compatibility with, and potential to enrich, existing sophisticated models and methods. Through extensive experimentation across varied tasks and diverse LLMs, the study establishes the general applicability and significant performance gains achievable by increasing the ensemble size of LLM agents.
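The two-phase procedure can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `mock_llm` is a hypothetical stand-in for a real (stochastic) LLM agent, and answers are compared by exact string match, which assumes answers can be canonicalized for voting.

```python
from collections import Counter
from itertools import cycle

def sample_and_vote(query, llm, ensemble_size):
    """Two-phase method: sample ensemble_size answers, then majority-vote."""
    # Phase 1 (sampling): feed the same query into the agent ensemble_size times.
    answers = [llm(query) for _ in range(ensemble_size)]
    # Phase 2 (voting): return the most frequent answer.
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for a stochastic LLM agent: a canned answer stream
# in which the correct answer "42" appears most often.
mock_answers = cycle(["42", "17", "42", "42", "99"])
mock_llm = lambda query: next(mock_answers)

print(sample_and_vote("What is 6 x 7?", mock_llm, 5))  # "42" wins with 3 of 5 votes
```

In practice the only free parameter is the ensemble size, which is why the method composes cleanly with other techniques: any prompting or collaboration framework can supply the per-sample answers.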

Empirical Findings and Contributions

Robust experiments conducted across numerous benchmarks reveal that a brute-force ensemble of smaller LLMs can achieve comparable or even superior performance to their larger counterparts. Astonishingly, enhanced smaller models have outperformed larger models in specific tasks, challenging the conventional emphasis on model size for performance improvement. Furthermore, this method's compatibility with other enhancement techniques has been validated, demonstrating its potential to serve as a universally beneficial plug-in to augment performance across the board.

Analyzing the Efficacy Across Task Difficulties

Critical examination of the method's effectiveness in relation to task difficulty yields fascinating insights. Through carefully designed experiments spanning several dimensions of problem complexity, the study delineates clear relationships between performance gains and the inherent difficulty of the task, the number of reasoning steps involved, and the prior probability of a correct answer. These findings not only deepen our understanding of the method's dynamics but also pave the way for optimized strategies to leverage the "More Agents" approach effectively.
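The role of the prior probability of a correct answer can be illustrated with a toy binomial model (an assumption for illustration only, not the paper's analysis): if each independent sample is correct with probability p, the accuracy of a majority vote over n samples follows directly from the binomial distribution.

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a strict majority of n independent samples is correct,
    given each sample is correct with prior probability p (use odd n to avoid ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

# Voting amplifies a prior above 0.5 and suppresses one below it.
for p in (0.3, 0.5, 0.7):
    print(p, round(majority_accuracy(p, 15), 3))
```

Under this simplified model, ensembling helps most when the single-sample prior is already above chance, which is consistent with the observed dependence of gains on task properties.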

Concluding Thoughts and Future Horizons

This seminal work contributes profoundly to our comprehension of LLM performance scaling through the instantiation of multiple agents. The simplicity of the sampling-and-voting method, coupled with its broad applicability and the significant performance improvements it engenders, marks a pivotal advancement in the field of generative AI and LLMs. Looking ahead, the paper acknowledges the need for optimizing the resource-intensive nature of scaling agent numbers and invites future research to build on these foundational insights. The exploration of methodologies to mitigate potential risks associated with model hallucinations remains an essential frontier for ensuring the responsible evolution of LLMs.

In summation, "More Agents Is All You Need" stands as a beacon of innovation, illuminating new pathways to harness the full potential of LLMs in tackling complex tasks with unparalleled effectiveness and efficiency. The implications of this study extend far beyond its immediate findings, heralding a new era of research and application in the realm of artificial intelligence.

