
ProAgent: Building Proactive Cooperative Agents with Large Language Models (2308.11339v3)

Published 22 Aug 2023 in cs.AI, cs.LG, and cs.MA

Abstract: Building agents with adaptive behavior in cooperative tasks stands as a paramount goal in the realm of multi-agent systems. Current approaches to developing cooperative agents rely primarily on learning-based methods, whose policy generalization depends heavily on the diversity of teammates they interact with during the training phase. This reliance, however, constrains the agents' capacity for strategic adaptation when cooperating with unfamiliar teammates, which becomes a significant challenge in zero-shot coordination scenarios. To address this challenge, we propose ProAgent, a novel framework that harnesses LLMs to create proactive agents capable of dynamically adapting their behavior to enhance cooperation with teammates. ProAgent can analyze the present state and infer the intentions of teammates from observations. It then updates its beliefs in alignment with the teammates' subsequent actual behaviors. Moreover, ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into a variety of coordination scenarios. Experimental evaluations conducted within the Overcooked-AI environment reveal the remarkable performance superiority of ProAgent, which outperforms five methods based on self-play and population-based training when cooperating with AI agents. Furthermore, when partnered with human proxy models, its performance exhibits an average improvement exceeding 10% over the current state-of-the-art method. For more information about our project, please visit~\url{https://pku-proagent.github.io}.

Introduction

LLMs have carved out a formidable niche in AGI research, offering capabilities that extend far beyond mere text generation. Trained on substantial volumes of data, these models demonstrate a grasp of commonsense knowledge that allows them to interact and make decisions in real time. Yet a substantial share of research has been directed at leveraging LLMs to perform tasks individually rather than in cooperation with others. ProAgent, a newly introduced framework, aims to bridge this gap by harnessing the power of LLMs and integrating proactive and cooperative capacities to interact constructively with partner agents.

Cooperative Framework

ProAgent marks a departure from traditional approaches that rely heavily on historical interaction data for policy generalization. Instead, it proactively anticipates the actions of teammate agents, enabling the formation of coherent and actionable plans. A cornerstone feature of ProAgent is its cooperative reasoning, which allows it to adjust dynamically to the decisions of other agents. The framework's modularity and interpretability make integration into various coordination scenarios seamless. Experiments within the Overcooked-AI environment show ProAgent's superior performance over existing methods, with an average improvement exceeding 10% when cooperating with counterparts that mirror human behavior.
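Dynamic adaptation of this kind can be viewed as maintaining a probability distribution over a teammate's possible intentions and re-weighting it after each observed action. The following is a minimal Bayesian-style sketch of that idea, not the paper's exact formulation; the intention names and likelihood values are illustrative placeholders.

```python
def update_belief(prior, likelihood):
    """Posterior over teammate intentions: prior x likelihood, renormalized."""
    unnorm = {i: prior[i] * likelihood[i] for i in prior}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

# Hypothetical Overcooked-style intentions with an uninformative prior.
prior = {"fetch_onion": 0.5, "deliver_soup": 0.5}
# The observed action is far more likely if the teammate intends to deliver.
likelihood = {"fetch_onion": 0.1, "deliver_soup": 0.9}
posterior = update_belief(prior, likelihood)
# posterior["deliver_soup"] == 0.9
```

After each teammate action, the posterior becomes the new prior, so belief in intentions consistent with observed behavior grows over time.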

Design and Modules

ProAgent consists of several central modules: a Planner, a Verificator, a Memory, and a Belief Correction mechanism. These modules work in unison to give ProAgent adaptable cooperative reasoning and planning. The Planner analyzes the current situation and predicts teammate intentions, which in turn informs the agent's skill planning. When a planned skill is not viable, the Verificator steps in, providing insight into why the skill fails and prompting a re-plan. The Memory module captures the trajectory of actions and analyses, supporting the continuous refinement of ProAgent's behavior. Belief Correction sharpens this refinement by aligning the agent's beliefs with the actual behaviors of teammates.
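The interaction between these modules can be sketched as a simple decision loop. The module names below follow the paper's description, but every function body is a stand-in for what ProAgent implements with LLM prompting; the skill names, state format, and re-weighting rule are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Records the trajectory of states, inferred intentions, and skills."""
    trajectory: list = field(default_factory=list)

    def record(self, state, intention, skill):
        self.trajectory.append((state, intention, skill))

def planner(state, belief):
    """Predict the teammate's intention and plan a matching skill (stub)."""
    intention = max(belief, key=belief.get)      # most likely intention
    return intention, f"skill_for_{intention}"   # hypothetical skill name

def verificator(skill, state):
    """Check whether the planned skill is currently executable."""
    return skill in state.get("feasible_skills", [])

def belief_correction(belief, observed_action):
    """Re-weight intentions by consistency with the teammate's behavior."""
    weight = {i: (2.0 if i in observed_action else 0.5) for i in belief}
    unnorm = {i: belief[i] * weight[i] for i in belief}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

def step(state, belief, memory, observed_action):
    """One decision step: correct beliefs, plan, verify, record."""
    belief = belief_correction(belief, observed_action)
    intention, skill = planner(state, belief)
    if not verificator(skill, state):
        skill = "wait"   # fallback; the real Verificator prompts a re-plan
    memory.record(state, intention, skill)
    return skill, belief
```

In the actual framework, the Planner and Verificator are LLM calls whose prompts include the Memory's trajectory, so each failure explanation feeds back into the next planning round.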

Experiments and Contributions

ProAgent was evaluated in the multi-agent coordination suite Overcooked-AI, where it demonstrated proficiency in coordinating with various types of AI teammates and performed especially well with teammate models that emulate human behavior. These experiments underline ProAgent's contributions to cooperative AI: it integrates LLMs into cooperative settings, infers teammate intentions transparently, and works alongside a wide spectrum of teammates.

The findings from this research highlight ProAgent's viability as a sophisticated tool for navigating complex cooperative scenarios. Its transparent, modular design, combined with the robust performance exhibited in experimental evaluations, signifies a noteworthy step forward in developing cooperative AI agents that can understand and anticipate the needs and actions of their partners.

Authors (15)
  1. Ceyao Zhang (11 papers)
  2. Kaijie Yang (10 papers)
  3. Siyi Hu (21 papers)
  4. Zihao Wang (216 papers)
  5. Guanghe Li (3 papers)
  6. Yihang Sun (8 papers)
  7. Cheng Zhang (388 papers)
  8. Zhaowei Zhang (25 papers)
  9. Anji Liu (35 papers)
  10. Song-Chun Zhu (216 papers)
  11. Xiaojun Chang (148 papers)
  12. Junge Zhang (47 papers)
  13. Feng Yin (36 papers)
  14. Yitao Liang (53 papers)
  15. Yaodong Yang (169 papers)
Citations (57)