Emergent Mind

Small LLMs Are Weak Tool Learners: A Multi-LLM Agent

arXiv:2401.07324
Published Jan 14, 2024 in cs.AI and cs.CL

Abstract

Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs, empowering them to interact with external tools (e.g., APIs, functions) and complete complex tasks in a self-directed fashion. The challenge of tool use demands that LLMs not only understand user queries and generate answers but also excel in task planning, memory management, tool invocation, and result summarization. While traditional approaches focus on training a single LLM with all these capabilities, performance limitations become apparent, particularly with smaller models. Moreover, the entire LLM may require retraining when tools are updated. To overcome these challenges, we propose a novel strategy that decomposes the aforementioned capabilities into a planner, caller, and summarizer. Each component is implemented by a single LLM that focuses on a specific capability and collaborates with other components to accomplish the task. This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability. To effectively train this framework, we introduce a two-stage training paradigm. First, we fine-tune a backbone LLM on the entire dataset without discriminating sub-tasks, providing the model with a comprehensive understanding of the task. Second, the fine-tuned LLM is used to instantiate the planner, caller, and summarizer respectively, which are continually fine-tuned on respective sub-tasks. Evaluation across various tool-use benchmarks illustrates that our proposed multi-LLM framework surpasses the traditional single-LLM approach, highlighting its efficacy and advantages in tool learning.
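The planner–caller–summarizer decomposition described above can be sketched as a simple dispatch loop. This is a minimal illustration, not the paper's implementation: the three roles here are stubbed rule-based functions and the `weather_api` tool registry is hypothetical, whereas in the paper each role is a separately fine-tuned LLM.

```python
# Sketch of the multi-LLM agent loop: the planner decides the next step,
# the caller executes tool invocations, and the summarizer produces the
# final answer from the accumulated results.

def planner(query: str, history: list) -> str:
    """Decide the next step: invoke a tool or hand off to the summarizer."""
    if not history:
        return "CALL weather_api"  # hypothetical tool name
    return "SUMMARIZE"

def caller(instruction: str) -> str:
    """Execute the tool invocation chosen by the planner."""
    tool = instruction.removeprefix("CALL ").strip()
    # Stubbed tool registry; a real system would dispatch to live APIs.
    registry = {"weather_api": lambda: "72F, sunny"}
    return registry[tool]()

def summarizer(query: str, history: list) -> str:
    """Turn accumulated tool results into a user-facing answer."""
    results = "; ".join(result for _, result in history)
    return f"Answer to '{query}': {results}"

def run_agent(query: str, max_steps: int = 4) -> str:
    history = []
    for _ in range(max_steps):
        plan = planner(query, history)
        if plan == "SUMMARIZE":
            return summarizer(query, history)
        history.append((plan, caller(plan)))
    return summarizer(query, history)

print(run_agent("What's the weather?"))
# Answer to 'What's the weather?': 72F, sunny
```

The modularity motivated in the abstract shows up here directly: swapping a tool means changing only the caller's registry, and each role can be replaced independently without retraining the others.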

