Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL (2209.09845v3)

Published 20 Sep 2022 in cs.LG, cs.MA, and stat.ML

Abstract: The framework of cooperative Multi-Agent Reinforcement Learning (MARL) with permutation-invariant agents has achieved tremendous empirical success in real-world applications. Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of relational reasoning in existing works. In this paper, we verify that the transformer implements complex relational reasoning, and we propose and analyze model-free and model-based offline MARL algorithms with transformer approximators. We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents, respectively, which mitigates the curse of many agents. These results are consequences of a novel generalization error bound for the transformer and a novel analysis of the Maximum Likelihood Estimate (MLE) of the system dynamics with the transformer. Our model-based algorithm is the first provably efficient MARL algorithm that explicitly exploits the permutation invariance of the agents. Our improved generalization bound may be of independent interest and is applicable to other regression problems related to the transformer beyond MARL.
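
To make the permutation-invariance property concrete, here is a minimal sketch (assuming PyTorch; an illustrative construction, not the paper's exact Set Transformer architecture) of an attention-based encoder over a set of agent observations. Self-attention over the agent axis is permutation-equivariant, and pooling over agents then makes the output permutation-invariant.

```python
# Illustrative sketch only: a permutation-invariant encoder over agent
# observations, built from standard self-attention plus mean pooling.
import torch
import torch.nn as nn

class SetAttentionEncoder(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)
        # Self-attention over the agent axis is permutation-equivariant:
        # permuting the agents permutes the outputs in the same way.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU())

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, num_agents, obs_dim)
        x = self.embed(obs)
        attn_out, _ = self.attn(x, x, x)
        x = self.ff(x + attn_out)
        # Mean pooling over the agent axis turns equivariance into
        # invariance: the result no longer depends on agent ordering.
        return x.mean(dim=1)

# Quick check of permutation invariance on random inputs.
enc = SetAttentionEncoder(obs_dim=8)
obs = torch.randn(2, 5, 8)                    # 2 environments, 5 agents
perm = torch.randperm(5)
out1, out2 = enc(obs), enc(obs[:, perm])
print(torch.allclose(out1, out2, atol=1e-5))  # True (up to float error)
```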

Authors (6)
  1. Fengzhuo Zhang (15 papers)
  2. Boyi Liu (49 papers)
  3. Kaixin Wang (30 papers)
  4. Vincent Y. F. Tan (205 papers)
  5. Zhuoran Yang (155 papers)
  6. Zhaoran Wang (164 papers)
Citations (8)
