
Learning Hierarchical Teaching Policies for Cooperative Agents (1903.03216v6)

Published 7 Mar 2019 in cs.LG, cs.AI, and cs.MA

Abstract: Collective learning can be greatly enhanced when agents effectively exchange knowledge with their peers. In particular, recent work studying agents that learn to teach other teammates has demonstrated that action advising accelerates team-wide learning. However, prior work has simplified the learning of advising policies by using simple function approximations and by advising only with primitive (low-level) actions, limiting the scalability of learning and teaching to complex domains. This paper introduces a novel learning-to-teach framework, called hierarchical multiagent teaching (HMAT), that improves scalability to complex environments by using deep representations for student policies and by advising with more expressive extended action sequences over multiple levels of temporal abstraction. Our empirical evaluations demonstrate that HMAT improves team-wide learning progress in large, complex domains where previous approaches fail. HMAT also learns teaching policies that can effectively transfer knowledge to different teammates with knowledge of different tasks, even when the teammates have heterogeneous action spaces.
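The core contrast the abstract draws is between advising a single primitive action and advising an extended, temporally abstract action sequence. The following minimal sketch illustrates that distinction only; it is not the authors' implementation, and all names (`Teacher`, `Student`, `advise_primitive`, `advise_extended`, the `horizon` parameter) are illustrative assumptions.

```python
import random


class Student:
    """A student agent with a placeholder policy over a discrete action space."""

    def __init__(self, actions):
        self.actions = actions

    def act(self, state):
        # Stand-in for a learned deep policy: pick an action at random.
        return random.choice(self.actions)


class Teacher:
    """A teacher that can advise a primitive action or an extended sequence.

    `horizon` is a hypothetical knob for the length of the advised
    sub-trajectory (one level of temporal abstraction).
    """

    def __init__(self, horizon=3):
        self.horizon = horizon

    def advise_primitive(self, student, state):
        # Prior learning-to-teach work: advise a single low-level action.
        return [student.actions[0]]

    def advise_extended(self, student, state):
        # HMAT-style idea (sketched): advise a multi-step action sequence
        # that the student executes over several timesteps.
        return [
            student.actions[t % len(student.actions)]
            for t in range(self.horizon)
        ]


student = Student(actions=["left", "right", "forward"])
teacher = Teacher(horizon=3)

primitive_advice = teacher.advise_primitive(student, state=None)
extended_advice = teacher.advise_extended(student, state=None)
```

Here `primitive_advice` spans a single timestep, while `extended_advice` covers `horizon` timesteps, which is what gives the teacher a more expressive advising channel in complex domains.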

Authors (10)
  1. Dong-Ki Kim (21 papers)
  2. Miao Liu (98 papers)
  3. Shayegan Omidshafiei (34 papers)
  4. Sebastian Lopez-Cot (2 papers)
  5. Matthew Riemer (32 papers)
  6. Golnaz Habibi (15 papers)
  7. Gerald Tesauro (29 papers)
  8. Sami Mourad (3 papers)
  9. Murray Campbell (27 papers)
  10. Jonathan P. How (159 papers)
Citations (7)
