
Concept Learning for Interpretable Multi-Agent Reinforcement Learning (2302.12232v1)

Published 23 Feb 2023 in cs.LG, cs.AI, and cs.RO

Abstract: Multi-agent robotic systems are increasingly operating in real-world environments in close proximity to humans, yet are largely controlled by policy models with inscrutable deep neural network representations. We introduce a method for incorporating interpretable concepts from a domain expert into models trained through multi-agent reinforcement learning, by requiring the model to first predict such concepts and then utilize them for decision making. This allows an expert both to reason about the resulting concept policy models in terms of these high-level concepts at run-time, and to intervene and correct mispredictions to improve performance. We show that this yields improved interpretability and training stability, with benefits to policy performance and sample efficiency in a simulated and real-world cooperative-competitive multi-agent game.
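The abstract describes a concept-bottleneck policy: observations are first mapped to expert-defined concepts, and the action is chosen from those concepts alone, so an expert can inspect or overwrite them at run time. A minimal sketch of that structure (all dimensions, weights, and the `intervention` interface here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: observation, expert-defined concepts, actions.
OBS_DIM, N_CONCEPTS, N_ACTIONS = 8, 3, 4

# Randomly initialized weights stand in for parameters learned via MARL.
W_concept = rng.normal(size=(OBS_DIM, N_CONCEPTS))
W_policy = rng.normal(size=(N_CONCEPTS, N_ACTIONS))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def concept_bottleneck_policy(obs, intervention=None):
    """Predict concepts from the observation, then act from concepts only.

    `intervention` is a hypothetical hook letting an expert overwrite
    predicted concept values at run time ({concept index: corrected value}),
    mirroring the intervention capability the abstract describes.
    """
    concepts = sigmoid(obs @ W_concept)      # interpretable bottleneck
    if intervention:
        for idx, value in intervention.items():
            concepts[idx] = value            # expert correction
    logits = concepts @ W_policy             # policy sees only the concepts
    action = int(np.argmax(logits))
    return concepts, action

obs = rng.normal(size=OBS_DIM)
concepts, action = concept_bottleneck_policy(obs)
# An expert can force a concept value and observe the resulting action.
_, corrected_action = concept_bottleneck_policy(obs, intervention={0: 1.0})
```

Because the policy head consumes only the concept vector, every action is attributable to the predicted concept values, which is the interpretability property the paper targets.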

Authors (5)
  1. Renos Zabounidis (3 papers)
  2. Joseph Campbell (36 papers)
  3. Simon Stepputtis (38 papers)
  4. Dana Hughes (11 papers)
  5. Katia Sycara (93 papers)
Citations (12)
