
Agent Lumos: Unified and Modular Training for Open-Source Language Agents (2311.05657v3)

Published 9 Nov 2023 in cs.AI, cs.CL, and cs.LG

Abstract: Closed-source agents suffer from several issues such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents. LUMOS features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these into actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, LUMOS exhibits several key advantages: (1) LUMOS surpasses multiple larger open-source agents on the held-out datasets (unused for training) for each task type. LUMOS even surpasses GPT agents on QA and web tasks; (2) LUMOS outperforms open-source agents produced by chain-of-thoughts and unmodularized integrated training; and (3) LUMOS effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents.


Summary

  • The paper introduces a unified framework that modularizes agent training into planning, grounding, and execution modules.
  • It leverages 56K annotations from existing benchmarks to enhance performance and generalization across nine datasets.
  • The approach outperforms current models on QA, web, and multimodal tasks, paving the way for more adaptable open-source AI agents.

Unified and Modular Training for Open-Source Language Agents with Lumos

Overview of Lumos

In recent developments within the AI and machine learning sphere, a new framework named Lumos has been introduced for training open-source language-based agents. The framework pairs a unified training process with a modular architecture, aiming to improve both the training and the applicability of LLM-based agents across diverse interactive tasks.

Unifying Features and Modular Architecture

Lumos distinguishes itself through its modular architecture, which separates the agent's functionality into three distinct but interrelated modules: Planning, Grounding, and Execution. This separation lets each module focus on a specific aspect of the task-solving process:

  • Planning Module (PM): Responsible for breaking down complex tasks into manageable subgoals.
  • Grounding Module (GM): Translates high-level subgoals into executable actions.
  • Execution Module (EM): Implements the actions using various existing tools and APIs.

This unified yet modular design makes Lumos highly adaptable; it can be upgraded or modified for new tasks and actions without affecting other modules.
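The division of labor above can be sketched as a minimal plan-ground-execute loop. This is an illustrative sketch only: the function names, interfaces, and toy tool logic below are assumptions for exposition, not Lumos's actual API (in Lumos, the planning and grounding modules are fine-tuned language models).

```python
# Minimal sketch of a plan -> ground -> execute pipeline.
# All interfaces here are illustrative assumptions, not Lumos's actual API.

def planning_module(task: str) -> list[str]:
    # A trained LM would generate high-level subgoals; hard-coded here.
    return [
        f"Find information relevant to: {task}",
        "Synthesize the findings into an answer",
    ]

def grounding_module(subgoal: str) -> dict:
    # A trained LM would emit an executable action; a toy mapping here.
    if subgoal.startswith("Find"):
        return {"tool": "search", "args": {"query": subgoal}}
    return {"tool": "summarize", "args": {}}

def execution_module(action: dict, tools: dict) -> str:
    # Dispatch the grounded action to the matching tool implementation.
    return tools[action["tool"]](**action["args"])

def run_agent(task: str, tools: dict) -> list[str]:
    results = []
    for subgoal in planning_module(task):
        action = grounding_module(subgoal)
        results.append(execution_module(action, tools))
    return results

# Stand-in tools; the execution module in Lumos wraps real tools and APIs.
tools = {
    "search": lambda query: f"search results for '{query}'",
    "summarize": lambda: "summary of collected results",
}
print(run_agent("Who wrote Hamlet?", tools))
```

Because each stage talks to the next only through plain text (subgoals) and structured actions, swapping in a new tool or retraining one module leaves the others untouched.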

Training Annotations

A key innovation within Lumos is its approach to generating training annotations. Instead of creating synthetic training data, Lumos leverages existing benchmarks across various domains, transforming their ground-truth reasoning steps into a unified format. This method not only ensures high-quality training data but also enables Lumos to be trained across a wide range of interactive tasks. With around 56K annotations derived from existing datasets, Lumos offers one of the largest and most diverse open-source resources for agent training.
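The conversion idea can be illustrated schematically. Note that the field names and the subgoal/action formats below are assumptions chosen for illustration, not the paper's exact annotation schema; in Lumos the rewriting itself is performed with an LLM rather than string templating.

```python
# Illustrative sketch: turning a ground-truth reasoning rationale from an
# existing benchmark into unified (subgoal, action) training pairs.
# Field names and formats are assumptions, not the paper's exact schema.

def convert_rationale(question: str, reasoning_steps: list[str]) -> dict:
    subgoals, actions = [], []
    for i, step in enumerate(reasoning_steps, start=1):
        subgoals.append(f"Subgoal {i}: {step}")
        # Lumos rewrites each step into a tool-call style action with an LLM;
        # here we just wrap the step as a placeholder action string.
        actions.append(f"R{i} = Action({step!r})")
    return {
        "planning_input": question,              # trains the planning module
        "planning_target": "; ".join(subgoals),
        "grounding_input": "; ".join(subgoals),  # trains the grounding module
        "grounding_target": "; ".join(actions),
    }

example = convert_rationale(
    "Who wrote Hamlet?",
    ["Identify the play's author", "Return the answer"],
)
print(example["planning_target"])
```

Applying one such converter per source benchmark is what lets heterogeneous datasets flow into a single unified training format.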

Performance Evaluation

Lumos was rigorously evaluated across nine different datasets encompassing web tasks, QA, math, and multimodal tasks. The performance measures reveal several compelling insights:

  • Lumos outperforms other open-source LLM agents across the board, often by significant margins. This includes notable results on datasets like Mind2Web, where it surpasses even GPT-series models on QA and web-based tasks.
  • The framework displays strong cross-task generalization. When evaluated on unseen tasks such as WebShop and InterCode-SQL, Lumos adapts to new environments and actions while outperforming domain-specific and larger-scale agents.
  • An analysis of training annotations suggests that Lumos’s annotations effectively contribute to its high performance and generalization capabilities, validating the annotation conversion methodology.

Implications and Future Directions

The development of Lumos marks a step forward in the research and application of open-source language agents. Its modular architecture, unified training process, and annotation-conversion method introduce a potent toolset for the AI community. These features not only enhance the affordability, transparency, and reproducibility of language agent research but also open avenues for further exploration in task generalizability and modular design.

Considering Lumos's demonstrated ability to generalize across tasks, future research might explore extending its capabilities to even more diverse interactive tasks. There is also potential in refining the adaptability and responsiveness of each module, possibly incorporating real-time learning and adjustment mechanisms for agents. As the landscape of artificial intelligence continues to evolve, frameworks like Lumos will play a crucial role in shaping the future of open-source, adaptable, and versatile language agent technologies.