Introduction
The integration of LLMs with APIs to accomplish complex tasks has become a focal point of AI research. Open-source models such as LLaMA have shown versatility through various instruction-tuning approaches. However, their tool-use capabilities, that is, interacting with external tools or APIs to follow complex human instructions, are not yet on par with state-of-the-art (SOTA) closed-source models like ChatGPT. To address this gap, a framework named ToolLLM has been presented, aimed at enabling open-source LLMs to competently master a wide array of real-world APIs.
Dataset Construction
The construction of the ToolBench dataset is a central aspect of this framework. ToolBench is designed to help LLMs learn to execute APIs and generalize to ones not encountered during training. The dataset spans 16,464 REST APIs across 49 categories and is built in three stages: collecting APIs, generating diverse instructions, and annotating solution paths. It covers both single-tool and multi-tool scenarios and is constructed automatically with ChatGPT, minimizing the need for human supervision. A depth-first search-based decision tree (DFSDT) algorithm strengthens LLMs' reasoning by letting them explore multiple reasoning traces and backtrack from failed ones, improving on chain-style methods such as ReAct.
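DFSDT is described here only at a high level; the Python sketch below is a minimal illustration of the search idea under that description, not the authors' implementation. The callables `propose_actions`, `execute`, and `is_terminal` are hypothetical stand-ins for LLM-driven action proposal, API-call execution, and the check for a final answer.

```python
def dfsdt(state, propose_actions, execute, is_terminal, max_depth=10):
    """Depth-first search over reasoning traces (illustrative sketch only).

    Unlike a single linear ReAct-style trace, the search keeps several
    candidate actions per node and backtracks when a branch fails.
    All callables are hypothetical stand-ins for LLM / API calls.
    """
    if is_terminal(state):          # this branch has reached a final answer
        return state
    if max_depth == 0:              # give up on overly deep branches
        return None
    for action in propose_actions(state):     # LLM proposes several next API calls
        next_state = execute(state, action)   # call the API, append the observation
        if next_state is None:                # failed call: prune this branch
            continue
        result = dfsdt(next_state, propose_actions, execute,
                       is_terminal, max_depth - 1)
        if result is not None:                # propagate the first successful trace
            return result
    return None                               # backtrack to the parent node
```

Compared with a single chain-of-thought or ReAct trace, the recursion keeps multiple candidate actions per node and backtracks from dead ends, which is the behavior the DFSDT description emphasizes.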
Evaluation and Model Training
ToolEval, the automated evaluator developed alongside ToolBench, provides metrics, namely pass rate and win rate, that quantify how effectively an LLM executes instructions. The fine-tuned LLaMA model, referred to as ToolLLaMA, is paired with a neural API retriever and executes complex instructions with performance comparable to ChatGPT, generalizing well even to out-of-distribution tool-use datasets. The API retriever removes the need to manually select APIs from the large collection and recommends relevant APIs with high precision.
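The retriever itself is only summarized above; as a rough illustration of dense API retrieval, the Python sketch below embeds API descriptions and instructions into a shared vector space and ranks APIs by similarity. It uses an off-the-shelf sentence encoder purely for demonstration; the model name, the `api_docs` mapping, and the helper functions are assumptions, and ToolLLM's actual retriever is trained on ToolBench rather than taken off the shelf.

```python
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf encoder used only for illustration; it does not reproduce
# the retriever trained on ToolBench.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def build_api_index(api_docs):
    """Embed every API description once; api_docs maps API name -> description."""
    names = list(api_docs)
    embeddings = encoder.encode([api_docs[n] for n in names],
                                convert_to_tensor=True,
                                normalize_embeddings=True)
    return names, embeddings

def retrieve_apis(instruction, names, embeddings, top_k=5):
    """Return the top-k APIs whose descriptions best match the instruction."""
    query = encoder.encode(instruction, convert_to_tensor=True,
                           normalize_embeddings=True)
    hits = util.semantic_search(query, embeddings, top_k=top_k)[0]
    return [(names[h["corpus_id"]], float(h["score"])) for h in hits]
```

At inference time, the retrieved API names and descriptions would be placed in the model's prompt, so the LLM only reasons over a handful of relevant APIs instead of the full collection.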
Insights and Generalization
ToolLLaMA offers compelling evidence for the adaptability of open-source LLMs to unseen instructions and tools, with results that rival those of the teacher model, ChatGPT. Its generalization extends to an out-of-distribution dataset, APIBench, where ToolLLaMA performs notably well despite never being trained on APIBench's domains. Notably, ToolLLaMA combined with the API retriever even surpasses its performance with ground-truth APIs, arguably because the retriever can identify APIs from the extensive database that are better suited to a given instruction.
In conclusion, ToolLLM stands as a comprehensive framework that imparts high-level tool-use capabilities to open-source LLMs, promoting the democratization of AI technologies and community-driven innovation. The methodologies developed within this framework, including ToolBench, DFSDT, ToolEval, and integrated API retrieval, point to the future trajectory of instruction tuning and tool use in LLMs.