
Working Memory Graphs (1911.07141v4)

Published 17 Nov 2019 in cs.LG, cs.AI, and cs.CL

Abstract: Transformers have increasingly outperformed gated RNNs in obtaining new state-of-the-art results on supervised tasks involving text sequences. Inspired by this trend, we study the question of how Transformer-based models can improve the performance of sequential decision-making agents. We present the Working Memory Graph (WMG), an agent that employs multi-head self-attention to reason over a dynamic set of vectors representing observed and recurrent state. We evaluate WMG in three environments featuring factored observation spaces: a Pathfinding environment that requires complex reasoning over past observations, BabyAI gridworld levels that involve variable goals, and Sokoban which emphasizes future planning. We find that the combination of WMG's Transformer-based architecture with factored observation spaces leads to significant gains in learning efficiency compared to baseline architectures across all tasks. WMG demonstrates how Transformer-based models can dramatically boost sample efficiency in RL environments for which observations can be factored.
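To make the architecture described in the abstract concrete, below is a minimal sketch of a WMG-style core: multi-head self-attention applied jointly to a set of factored observation vectors and a fixed-size buffer of recurrent "memo" vectors, producing action logits and an updated memo buffer. All module names, dimensions, layer counts, and the rolling-buffer memo update are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class WorkingMemoryGraphSketch(nn.Module):
    """Hypothetical sketch of a WMG-style agent core (not the authors' code).

    Self-attention is applied over a dynamic set of vectors: embedded
    observation factors plus recurrent memo vectors carried across steps.
    """

    def __init__(self, obs_dim, memo_dim, hidden_dim=128,
                 num_heads=4, num_layers=2, num_actions=6):
        super().__init__()
        self.obs_embed = nn.Linear(obs_dim, hidden_dim)    # embed each observation factor
        self.memo_embed = nn.Linear(memo_dim, hidden_dim)  # embed each recurrent memo vector
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.core = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.memo_out = nn.Linear(hidden_dim, memo_dim)      # next memo to store
        self.policy_head = nn.Linear(hidden_dim, num_actions)  # action logits (num_actions is illustrative)

    def forward(self, obs_factors, memos):
        # obs_factors: (batch, n_factors, obs_dim)  -- factored observation at this step
        # memos:       (batch, n_memos, memo_dim)   -- recurrent state from past steps
        tokens = torch.cat([self.obs_embed(obs_factors),
                            self.memo_embed(memos)], dim=1)
        h = self.core(tokens)                # multi-head self-attention over all vectors
        summary = h[:, 0]                    # take one token's output as a step summary (assumption)
        new_memo = self.memo_out(summary)    # memo written back to the recurrent buffer
        logits = self.policy_head(summary)   # policy logits for the RL agent
        # fixed-size rolling buffer: drop the oldest memo, append the new one
        next_memos = torch.cat([memos[:, 1:], new_memo.unsqueeze(1)], dim=1)
        return logits, next_memos
```

In this sketch the memo buffer plays the role of recurrent state: because self-attention treats observation factors and memos as an unordered set of vectors, the number of observation factors can vary from step to step, which is what makes factored observation spaces a natural fit for this architecture.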

Authors (5)
  1. Ricky Loynd (6 papers)
  2. Roland Fernandez (14 papers)
  3. Asli Celikyilmaz (81 papers)
  4. Adith Swaminathan (28 papers)
  5. Matthew Hausknecht (26 papers)
Citations (39)
