Learning Evolving Tools for Large Language Models (2410.06617v2)

Published 9 Oct 2024 in cs.CL and cs.AI

Abstract: Tool learning enables LLMs to interact with external tools and APIs, greatly expanding the application scope of LLMs. However, due to the dynamic nature of external environments, these tools and APIs may become outdated over time, preventing LLMs from correctly invoking tools. Existing research primarily focuses on static environments and overlooks this issue, limiting the adaptability of LLMs in real-world applications. In this paper, we propose ToolEVO, a novel framework designed to enhance the adaptive and reflective capabilities of LLMs against tool variability. By leveraging Monte Carlo Tree Search, ToolEVO facilitates active exploration and interaction of LLMs within dynamic environments, allowing for autonomous self-reflection and self-updating of tool usage based on environmental feedback. Additionally, we introduce ToolQA-D, a benchmark specifically designed to evaluate the impact of tool variability. Extensive experiments demonstrate the effectiveness and stability of our approach, highlighting the importance of adaptability to tool variability for effective tool learning.
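The abstract does not include implementation details, but the core idea of using Monte Carlo Tree Search to explore tool invocations and update usage from environment feedback can be illustrated with a minimal sketch. The candidate call signatures, the drifted "correct" signature, and the binary reward below are all hypothetical; for simplicity this is a one-level search (a UCB1 bandit, i.e. MCTS selection and backpropagation without a deeper tree):

```python
import math

# Hypothetical scenario: the tool's API has drifted so that only one of
# these candidate call signatures still works. The agent must discover
# the current signature by trial and environment feedback.
CANDIDATE_CALLS = ["query(q)", "query(text)", "search(q)", "search(text)"]
CORRECT_CALL = "search(text)"  # assumed current (updated) signature

def invoke(call):
    """Simulated environment feedback: 1.0 if the call succeeds, else 0.0."""
    return 1.0 if call == CORRECT_CALL else 0.0

class Node:
    """Search node tracking visit count and accumulated reward."""
    def __init__(self, call):
        self.call = call
        self.visits = 0
        self.value = 0.0

def ucb1(node, total_visits, c=1.4):
    """Upper-confidence-bound score balancing exploitation and exploration."""
    if node.visits == 0:
        return float("inf")  # force each candidate to be tried at least once
    return node.value / node.visits + c * math.sqrt(
        math.log(total_visits) / node.visits
    )

def explore_tool(iterations=200):
    """Repeatedly select, invoke, and backpropagate; return the best call."""
    children = [Node(c) for c in CANDIDATE_CALLS]
    for total in range(1, iterations + 1):
        # Selection: pick the child maximizing UCB1
        node = max(children, key=lambda n: ucb1(n, total))
        # Simulation: invoke the tool and observe the environment's feedback
        reward = invoke(node.call)
        # Backpropagation: update the node's statistics
        node.visits += 1
        node.value += reward
    # The most-visited call becomes the agent's updated tool usage
    return max(children, key=lambda n: n.visits).call
```

After a short exploration phase, the search concentrates its visits on the signature that the environment rewards, mirroring (in miniature) how feedback-driven search lets an agent self-update its tool usage when an API changes.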

Authors (8)
  1. Guoxin Chen
  2. Zhong Zhang
  3. Xin Cong
  4. Fangda Guo
  5. Yesai Wu
  6. Yankai Lin
  7. Wenzheng Feng
  8. Yasheng Wang