
MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use (2310.03128v6)

Published 4 Oct 2023 in cs.SE and cs.CL

Abstract: LLMs have garnered significant attention due to their impressive NLP capabilities. Recently, many studies have focused on the tool utilization ability of LLMs. They primarily investigated how LLMs effectively collaborate with given specific tools. However, in scenarios where LLMs serve as intelligent agents, as seen in applications like AutoGPT and MetaGPT, LLMs are expected to engage in intricate decision-making processes that involve deciding whether to employ a tool and selecting the most suitable tool(s) from a collection of available tools to fulfill user requests. Therefore, in this paper, we introduce MetaTool, a benchmark designed to evaluate whether LLMs have tool usage awareness and can correctly choose tools. Specifically, we create a dataset called ToolE within the benchmark. This dataset contains various types of user queries in the form of prompts that trigger LLMs to use tools, including both single-tool and multi-tool scenarios. Subsequently, we set the tasks for both tool usage awareness and tool selection. We define four subtasks from different perspectives in tool selection, including tool selection with similar choices, tool selection in specific scenarios, tool selection with possible reliability issues, and multi-tool selection. We conduct experiments involving eight popular LLMs and find that the majority of them still struggle to effectively select tools, highlighting the existing gaps between LLMs and genuine intelligent agents. However, through the error analysis, we found there is still significant room for improvement. Finally, we conclude with insights for tool developers -- we strongly recommend that tool developers choose an appropriate rewrite model for generating new descriptions based on the downstream LLM the tool will apply to. Our code is in https://github.com/HowieHwong/MetaTool.

MetaTool Benchmark for LLMs: Deciding Whether to Use Tools and Which to Use

The research paper titled "MetaTool Benchmark for LLMs: Deciding Whether to Use Tools and Which to Use" provides a comprehensive investigation into the integration and utilization of tools by LLMs. With the objective of transforming LLMs into more autonomous intelligent agents, the paper introduces MetaTool, a benchmark specifically designed to assess an LLM's ability to decide when and which tools to employ. Tool use in intelligent agents such as AutoGPT and MetaGPT requires LLMs to make complex decisions, which calls for robust evaluation mechanisms to determine how effectively these models select the correct tools.

At the core of MetaTool is the ToolE dataset. This dataset includes a diverse array of user queries, presented in prompt form, designed to trigger LLM tool use in both single-tool and multi-tool scenarios. The tasks defined on the dataset target two capabilities: tool usage awareness (deciding whether a tool is needed at all) and tool selection. Tool selection is further divided into four subtasks: selection among similar tools, selection in specific scenarios, selection under possible reliability issues, and multi-tool selection.
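To make the two tasks concrete, the evaluation logic can be sketched as a small harness: each record pairs a user query with the ground-truth tool label(s), awareness is scored on whether the model correctly decides *if* a tool is needed, and selection on whether it picks exactly the right tool(s). The record schema, tool names, and the keyword "model" below are hypothetical illustrations, not the paper's actual ToolE format or evaluated systems.

```python
# Minimal sketch of a MetaTool-style evaluation loop.
# Field names, tool names, and queries are invented for illustration;
# the real ToolE schema may differ.

QUERIES = [
    {"query": "Translate this contract into French.", "tools": {"translator"}},
    {"query": "What's 2 + 2?", "tools": set()},  # no tool needed
    {"query": "Plot last month's sales and email the chart.",
     "tools": {"chart_maker", "email_sender"}},
]

def evaluate(predict):
    """Score tool-usage awareness and tool selection.

    `predict` maps a query string to a set of chosen tool names;
    an empty set means "no tool needed".
    """
    awareness_hits = selection_hits = 0
    for record in QUERIES:
        predicted = predict(record["query"])
        gold = record["tools"]
        # Awareness: did the model decide correctly WHETHER to use a tool?
        awareness_hits += (bool(predicted) == bool(gold))
        # Selection: did it pick exactly the right tool(s)?
        selection_hits += (predicted == gold)
    n = len(QUERIES)
    return {"awareness": awareness_hits / n, "selection": selection_hits / n}

# A trivial keyword-matching stand-in for an LLM, used only to
# exercise the harness end to end.
def keyword_model(query):
    tools = set()
    q = query.lower()
    if "translate" in q:
        tools.add("translator")
    if "plot" in q or "chart" in q:
        tools.add("chart_maker")
    if "email" in q:
        tools.add("email_sender")
    return tools

scores = evaluate(keyword_model)
print(scores)  # {'awareness': 1.0, 'selection': 1.0}
```

In the paper's setup the predictor would be an LLM prompted with the query and the candidate tool descriptions; the harness above only illustrates how the two metrics separate "knowing a tool is needed" from "choosing the right one".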

Experiments conducted with eight popular LLMs revealed a recurrent issue: most LLMs struggle to select the appropriate tools effectively, which points to a significant gap between current LLM capabilities and those required of true intelligent agents. The error analysis further suggests substantial room for improvement in LLMs' ability to choose and employ tools.

Moreover, the research provides actionable insights for tool developers. Notably, it recommends that tool developers choose a rewrite model suited to the downstream LLM the tool will serve, so that tool descriptions can be regenerated in a form that LLM selects more reliably.

These findings have implications for both practical applications and theoretical advancements in AI. The practical impact lies in LLM applications for enhanced data retrieval, user interaction, and task resolution through efficient tool use, paving the way for more capable and autonomous AI systems. From a theoretical perspective, these insights foster a deeper understanding of the interaction between LLMs and external tools, encouraging further exploration of AI autonomy and decision-making.

Anticipating future developments in AI, this research holds significant promise, especially in advancing the autonomous capacities of LLMs. With ongoing improvements, LLMs are expected to achieve more nuanced decision-making abilities, optimally selecting from an expanding repertoire of tools to maximize task performance across diverse and complex user scenarios.

Authors (11)
  1. Yue Huang (171 papers)
  2. Jiawen Shi (11 papers)
  3. Yuan Li (392 papers)
  4. Chenrui Fan (9 papers)
  5. Siyuan Wu (18 papers)
  6. Qihui Zhang (13 papers)
  7. Yixin Liu (108 papers)
  8. Pan Zhou (220 papers)
  9. Yao Wan (70 papers)
  10. Neil Zhenqiang Gong (117 papers)
  11. Lichao Sun (186 papers)
Citations (61)