
Chain of Tools: Large Language Model is an Automatic Multi-tool Learner (2405.16533v1)

Published 26 May 2024 in cs.CL

Abstract: Augmenting LLMs with external tools has emerged as a promising approach to extend their utility, empowering them to solve practical tasks. Existing work typically empowers LLMs as tool users with a manually designed workflow, where the LLM plans a series of tools in a step-by-step manner, and sequentially executes each tool to obtain intermediate results until deriving the final answer. However, such workflows suffer from two challenges in realistic scenarios: (1) The handcrafted control flow is often ad-hoc and constrains the LLM to local planning; (2) The LLM is instructed to use only manually demonstrated tools or well-trained Python functions, which limits its generalization to new tools. In this work, we first propose Automatic Tool Chain (ATC), a framework that enables the LLM to act as a multi-tool user, which directly utilizes a chain of tools through programming. To scale up the scope of the tools, we next propose a black-box probing method. This further empowers the LLM as a tool learner that can actively discover and document tool usages, teaching itself to properly master new tools. For a comprehensive evaluation, we build a challenging benchmark named ToolFlow, which diverges from previous benchmarks by its long-term planning scenarios and complex toolset. Experiments on both existing datasets and ToolFlow illustrate the superiority of our framework. Analysis across different settings also validates the effectiveness and utility of our black-box probing algorithm.
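The abstract compresses two mechanisms worth unpacking. Below is a minimal, hypothetical Python sketch of both ideas: a chain-of-tools runner in which the LLM emits one program over the whole toolset instead of planning one tool per turn, and a black-box probe that calls an undocumented tool on trial inputs and asks the LLM to write its documentation from the observations. The `llm_generate` function, the prompt wording, and the tool registry are illustrative placeholders, not the paper's released implementation.

```python
# Sketch of the two ideas in the abstract. Placeholders, not the
# paper's code: `llm_generate` stands in for any LLM call that
# returns text, and `tools` is a name -> callable registry.
from typing import Callable, Dict, List, Tuple


def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call returning generated text/code."""
    raise NotImplementedError("wire this to your LLM of choice")


def run_tool_chain(task: str, tools: Dict[str, Callable]) -> object:
    """ATC-style execution: one generated program chains all tools."""
    # Expose each tool's name and docstring so the LLM can compose
    # them into a single program instead of a step-by-step loop.
    tool_docs = "\n".join(f"{name}: {fn.__doc__}" for name, fn in tools.items())
    prompt = (
        f"Tools available:\n{tool_docs}\n\n"
        f"Task: {task}\n"
        "Write a Python function solve() that calls the tools above "
        "and returns the final answer."
    )
    program = llm_generate(prompt)

    # Run the generated program with the tools in scope; the whole
    # chain of tool calls executes in one pass. In practice this
    # needs sandboxing, since the code is model-generated.
    namespace: Dict[str, object] = dict(tools)
    exec(program, namespace)
    return namespace["solve"]()


def probe_tool(tool: Callable, trial_inputs: List[Tuple]) -> str:
    """Black-box probing sketch: observe a new tool, then document it."""
    observations = []
    for args in trial_inputs:
        try:
            observations.append(f"{tool.__name__}{args} -> {tool(*args)!r}")
        except Exception as exc:
            # Failed calls are informative too: they reveal the
            # tool's expected argument types and constraints.
            observations.append(f"{tool.__name__}{args} !! {exc}")
    return llm_generate(
        "Write API documentation describing what this tool does, "
        "based on these observed calls:\n" + "\n".join(observations)
    )
```

The probe's output is the kind of self-written tool documentation that, per the abstract, lets the model master tools it was never demonstrated, feeding back into the tool descriptions that `run_tool_chain` puts in its prompt.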

Authors (10)
  1. Zhengliang Shi (15 papers)
  2. Shen Gao (49 papers)
  3. Xiuyi Chen (15 papers)
  4. Yue Feng (55 papers)
  5. Lingyong Yan (29 papers)
  6. Haibo Shi (9 papers)
  7. Dawei Yin (165 papers)
  8. Zhumin Chen (78 papers)
  9. Suzan Verberne (57 papers)
  10. Zhaochun Ren (117 papers)
Citations (10)