
Generalizable Episodic Memory for Deep Reinforcement Learning (2103.06469v3)

Published 11 Mar 2021 in cs.LG and cs.AI

Abstract: Episodic memory-based methods can rapidly latch onto past successful strategies through a non-parametric memory and improve the sample efficiency of traditional reinforcement learning. However, little effort has been devoted to the continuous domain, where a state is never visited twice and previous episodic methods fail to aggregate experience efficiently across trajectories. To address this problem, we propose Generalizable Episodic Memory (GEM), which effectively organizes the state-action values of episodic memory in a generalizable manner and supports implicit planning on memorized trajectories. GEM utilizes a double estimator to reduce the overestimation bias induced by value propagation in the planning process. Empirical evaluation shows that our method significantly outperforms existing trajectory-based methods on various MuJoCo continuous control tasks. To further show its general applicability, we evaluate our method on Atari games with a discrete action space, where it also shows a significant improvement over baseline algorithms.
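
The abstract's two key ideas, implicit planning on memorized trajectories and a double estimator to curb overestimation, can be illustrated with a minimal sketch. This is a hypothetical simplification for intuition, not the paper's exact algorithm: values are propagated backward along one stored trajectory, and at each step the target takes the better of the propagated trajectory value and the minimum of two independent bootstrapped Q estimates.

```python
def episodic_backup(rewards, q1, q2, gamma=0.99):
    """Backward value propagation over one memorized trajectory (sketch).

    rewards: r_0 .. r_{T-1} along the stored trajectory.
    q1, q2:  two independent bootstrapped Q estimates for the successor
             state s_{t+1}, for t = 0 .. T-2. Taking their min is the
             double-estimator trick that reduces overestimation bias.
    Returns propagated value targets v_0 .. v_{T-1}.
    """
    T = len(rewards)
    v = [0.0] * T
    v[-1] = rewards[-1]  # terminal step: no bootstrap term
    for t in reversed(range(T - 1)):
        q_boot = min(q1[t], q2[t])  # double estimator
        # Implicit planning: follow the memorized trajectory onward only
        # when its propagated value beats the parametric estimate.
        v[t] = rewards[t] + gamma * max(q_boot, v[t + 1])
    return v
```

For example, with `gamma=1.0`, `rewards=[1.0, 0.0, 2.0]`, and bootstrapped estimates whose element-wise minima are `[0.5, 0.1]`, the large terminal reward propagates backward past the weaker parametric estimates, yielding targets `[3.0, 2.0, 2.0]`.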

Authors (5)
  1. Hao Hu (114 papers)
  2. Jianing Ye (7 papers)
  3. Guangxiang Zhu (8 papers)
  4. Zhizhou Ren (13 papers)
  5. Chongjie Zhang (68 papers)
Citations (36)
