Confucius: Iterative Tool Learning from Introspection Feedback by Easy-to-Difficult Curriculum (2308.14034v2)
Abstract: Augmenting LLMs with external tools has emerged as a promising approach to extending their capabilities. Although some works employ open-source LLMs for the tool learning task, most of them are trained in a controlled environment in which the LLMs only learn to execute human-provided tools. However, selecting the proper tools from a large toolset is also a crucial ability for a tool learning model deployed in real-world applications. Existing methods usually train the model with self-instruction directly, ignoring differences in tool complexity. In this paper, we propose Confucius, a novel tool learning framework that trains LLMs to use complicated tools in real-world scenarios. It consists of two main phases: (1) we first propose a multi-stage learning method that teaches the LLM to use various tools through an easy-to-difficult curriculum; (2) we then propose Iterative Self-instruct from Introspective Feedback (ISIF), which dynamically constructs the training dataset to improve the model's ability to use complicated tools. Extensive experiments in both controlled and real-world settings demonstrate that our tool learning framework outperforms both tuning-free baselines (e.g., ChatGPT, Claude) and tuning-based baselines (e.g., GPT4Tools) in real-world application scenarios.
- Shen Gao
- Zhengliang Shi
- Minghang Zhu
- Bowen Fang
- Xin Xin
- Pengjie Ren
- Zhumin Chen
- Jun Ma
- Zhaochun Ren
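
The abstract only outlines the two training phases at a high level. The snippet below is a minimal, hypothetical sketch of how an easy-to-difficult curriculum followed by an ISIF-style dataset-rebuilding loop could be wired together; every name in it (`Tool`, `Instance`, the `generate`/`introspect`/`revise`/`finetune` callables, and the complexity staging) is an assumption made for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Tool:
    name: str
    complexity: int  # assumed staging: 1 = single tool, 2 = in-category, 3 = cross-category


@dataclass
class Instance:
    query: str       # a user request generated via self-instruct
    tool_call: str   # the tool invocation the model proposes for that request


def train_confucius_like(
    generate: Callable[[List[Tool]], List[Instance]],  # self-instruct data generator
    introspect: Callable[[Instance], bool],            # model critiques its own sample
    revise: Callable[[Instance], Instance],            # model rewrites a rejected sample
    finetune: Callable[[List[Instance]], None],        # one supervised fine-tuning round
    toolset: List[Tool],
    isif_rounds: int = 3,
) -> None:
    # Phase 1: easy-to-difficult curriculum, widening the toolset stage by stage.
    for max_complexity in (1, 2, 3):
        stage_tools = [t for t in toolset if t.complexity <= max_complexity]
        finetune(generate(stage_tools))

    # Phase 2: ISIF-style loop - the training set is rebuilt from the model's own
    # introspective feedback before every further fine-tuning round.
    dataset = generate(toolset)
    for _ in range(isif_rounds):
        dataset = [inst if introspect(inst) else revise(inst) for inst in dataset]
        finetune(dataset)
```

The point the sketch tries to capture is that the training set is not fixed up front: each ISIF round keeps or revises the previous round's instances based on the model's own judgment, which is what the abstract refers to as dynamically constructing the dataset.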