
Stackelberg Decision Transformer for Asynchronous Action Coordination in Multi-Agent Systems (2305.07856v1)

Published 13 May 2023 in cs.MA and cs.AI

Abstract: Asynchronous action coordination presents a pervasive challenge in Multi-Agent Systems (MAS), which can be represented as a Stackelberg game (SG). However, the scalability of existing Multi-Agent Reinforcement Learning (MARL) methods based on SG is severely constrained by network structures or environmental limitations. To address this issue, we propose the Stackelberg Decision Transformer (STEER), a heuristic approach that resolves the difficulties of hierarchical coordination among agents. STEER efficiently manages decision-making processes in both spatial and temporal contexts by incorporating the hierarchical decision structure of SG, the modeling capability of autoregressive sequence models, and the exploratory learning methodology of MARL. Our research contributes to the development of an effective and adaptable asynchronous action coordination method that can be widely applied to various task types and environmental configurations in MAS. Experimental results demonstrate that our method can converge to Stackelberg equilibrium solutions and outperforms other existing methods in complex scenarios.
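For context on the equilibrium concept the abstract refers to, a Stackelberg game models coordination as a leader-follower hierarchy: the leader commits to an action first and the follower best-responds to it. A minimal two-agent formulation, using standard bi-level notation rather than the paper's own, is:

% Two-agent Stackelberg equilibrium (standard bi-level form; notation assumed, not taken from the paper)
% Follower's best response to the leader's action a_1 in state s, then the leader's anticipating choice:
\[
  \mathrm{BR}_2(s, a_1) = \operatorname*{arg\,max}_{a_2} Q_2(s, a_1, a_2), \qquad
  a_1^{*} = \operatorname*{arg\,max}_{a_1} Q_1\bigl(s, a_1, \mathrm{BR}_2(s, a_1)\bigr), \qquad
  a_2^{*} = \mathrm{BR}_2(s, a_1^{*}).
\]

Converging to such a solution asynchronously is what STEER targets by unrolling the leader-follower decision order with an autoregressive sequence model.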

Authors (7)
  1. Bin Zhang (227 papers)
  2. Hangyu Mao (37 papers)
  3. Lijuan Li (4 papers)
  4. Zhiwei Xu (84 papers)
  5. Dapeng Li (32 papers)
  6. Rui Zhao (241 papers)
  7. Guoliang Fan (23 papers)
Citations (4)
