
The act of remembering: a study in partially observable reinforcement learning (2010.01753v1)

Published 5 Oct 2020 in cs.LG and cs.AI

Abstract: Reinforcement Learning (RL) agents typically learn memoryless policies (policies that only consider the last observation when selecting actions). Learning memoryless policies is efficient and optimal in fully observable environments. However, some form of memory is necessary when RL agents are faced with partial observability. In this paper, we study a lightweight approach to tackle partial observability in RL. We provide the agent with an external memory and additional actions to control what, if anything, is written to the memory. At every step, the current memory state is part of the agent's observation, and the agent selects a tuple of actions: one action that modifies the environment and another that modifies the memory. When the external memory is sufficiently expressive, optimal memoryless policies yield globally optimal solutions. Unfortunately, previous attempts to use external memory in the form of binary memory have produced poor results in practice. Here, we investigate alternative forms of memory in support of learning effective memoryless policies. Our novel forms of memory outperform binary and LSTM-based memory in well-established partially observable domains.
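
To make the mechanism described in the abstract concrete, below is a minimal sketch (not the authors' code) of how an external memory can be attached to a partially observable environment so that a memoryless policy over the augmented observation effectively carries memory. It assumes a Gymnasium-style interface; the wrapper name, the fixed binary memory layout, and the per-cell write actions are illustrative assumptions, and the paper's point is that richer memory forms than this binary one learn better in practice.

```python
import numpy as np
import gymnasium as gym


class ExternalMemoryWrapper(gym.Wrapper):
    """Illustrative sketch: augment a POMDP-like env with an external memory.

    The agent's observation becomes (env_observation, memory_state), and each
    step takes a tuple (env_action, memory_action). Here the memory is a fixed
    vector of binary cells, and memory actions either leave the memory
    unchanged or write 0/1 to one cell. This is only one possible layout;
    the paper studies alternative, more expressive memory forms.
    """

    def __init__(self, env, num_cells=3):
        super().__init__(env)
        self.num_cells = num_cells
        # Memory actions: index 0 is a no-op, the rest are (cell, bit) writes.
        self.memory_actions = [None] + [
            (cell, bit) for cell in range(num_cells) for bit in (0, 1)
        ]
        self.action_space = gym.spaces.Tuple(
            (env.action_space, gym.spaces.Discrete(len(self.memory_actions)))
        )
        self.observation_space = gym.spaces.Tuple(
            (env.observation_space, gym.spaces.MultiBinary(num_cells))
        )
        self.memory = np.zeros(num_cells, dtype=np.int8)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.memory = np.zeros(self.num_cells, dtype=np.int8)
        return (obs, self.memory.copy()), info

    def step(self, action):
        env_action, mem_action_idx = action
        write = self.memory_actions[mem_action_idx]
        if write is not None:
            cell, bit = write
            self.memory[cell] = bit
        obs, reward, terminated, truncated, info = self.env.step(env_action)
        # The memory state is part of the returned observation, so a standard
        # memoryless RL algorithm can be run on the augmented environment.
        return (obs, self.memory.copy()), reward, terminated, truncated, info
```

Any off-the-shelf memoryless RL algorithm can then be trained on the wrapped environment; the agent learns jointly which environment action to take and what, if anything, to record in the memory.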

Authors (6)
  1. Rodrigo Toro Icarte (14 papers)
  2. Richard Valenzano (4 papers)
  3. Toryn Q. Klassen (11 papers)
  4. Phillip Christoffersen (2 papers)
  5. Amir-massoud Farahmand (31 papers)
  6. Sheila A. McIlraith (22 papers)
Citations (10)
