Sample-Efficient Reinforcement Learning with Maximum Entropy Mellowmax Episodic Control (1911.09615v1)

Published 21 Nov 2019 in cs.LG, cs.NE, and stat.ML

Abstract: Deep networks have enabled reinforcement learning to scale to more complex and challenging domains, but these methods typically require large quantities of training data. An alternative is to use sample-efficient episodic control methods: neuro-inspired algorithms which use non-/semi-parametric models that predict values based on storing and retrieving previously experienced transitions. One way to further improve the sample efficiency of these approaches is to use more principled exploration strategies. In this work, we therefore propose maximum entropy mellowmax episodic control (MEMEC), which samples actions according to a Boltzmann policy with a state-dependent temperature. We demonstrate that MEMEC outperforms other uncertainty- and softmax-based exploration methods on classic reinforcement learning environments and Atari games, achieving both more rapid learning and higher final rewards.
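
The "Boltzmann policy with a state-dependent temperature" refers to the maximum-entropy mellowmax policy of Asadi and Littman (2017): for each state, the mellowmax value mm_omega(Q) = (1/omega) * log((1/n) * sum_a exp(omega * Q(a))) is computed over the per-action value estimates, and the inverse temperature beta is found by root finding so that the resulting softmax policy has zero expected advantage relative to mm_omega(Q). The sketch below illustrates that per-state computation; it is not the paper's code, and omega = 5.0, the root-finding bracket, and the toy Q-values are illustrative assumptions.

    import numpy as np
    from scipy.optimize import brentq

    def mellowmax(q, omega):
        # mm_omega(q) = (1/omega) * log((1/n) * sum_i exp(omega * q_i)),
        # computed with the usual max-subtraction for numerical stability.
        c = q.max()
        return c + np.log(np.mean(np.exp(omega * (q - c)))) / omega

    def max_entropy_mellowmax_policy(q, omega=5.0):
        # Choose the inverse temperature beta so that the softmax policy's
        # expected advantage over the mellowmax value is zero
        # (Asadi & Littman, 2017); beta therefore depends on the state.
        adv = q - mellowmax(q, omega)
        if np.allclose(adv, 0.0):  # all actions look equal: uniform policy
            return np.full(len(q), 1.0 / len(q))

        def expected_advantage(beta):
            w = np.exp(beta * adv - (beta * adv).max())  # stabilized weights
            return float(np.dot(w / w.sum(), adv))

        # expected_advantage increases with beta, is negative as beta -> -inf
        # and positive as beta -> +inf, so a root exists; the bracket is an
        # illustrative assumption.
        beta = brentq(expected_advantage, -100.0, 100.0)
        w = np.exp(beta * adv - (beta * adv).max())
        return w / w.sum()

    # Hypothetical per-action value estimates retrieved from episodic memory.
    q_values = np.array([1.0, 1.2, 0.8])
    probs = max_entropy_mellowmax_policy(q_values)
    action = np.random.choice(len(q_values), p=probs)

Because beta is recomputed from the value estimates at every state, exploration adapts per state: the policy is near-uniform where the estimates are flat and near-greedy where one action clearly dominates.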

Authors (5)
  1. Marta Sarrico
  2. Kai Arulkumaran
  3. Andrea Agostinelli
  4. Pierre Richemond
  5. Anil Anthony Bharath
Citations (2)
