Discovering Reinforcement Learning Algorithms (2007.08794v3)

Published 17 Jul 2020 in cs.LG and cs.AI

Abstract: Reinforcement learning (RL) algorithms update an agent's parameters according to one of several possible rules, discovered manually through years of research. Automating the discovery of update rules from data could lead to more efficient algorithms, or algorithms that are better adapted to specific environments. Although there have been prior attempts at addressing this significant scientific challenge, it remains an open question whether it is feasible to discover alternatives to fundamental concepts of RL such as value functions and temporal-difference learning. This paper introduces a new meta-learning approach that discovers an entire update rule which includes both 'what to predict' (e.g. value functions) and 'how to learn from it' (e.g. bootstrapping) by interacting with a set of environments. The output of this method is an RL algorithm that we call Learned Policy Gradient (LPG). Empirical results show that our method discovers its own alternative to the concept of value functions. Furthermore, it discovers a bootstrapping mechanism to maintain and use its predictions. Surprisingly, when trained solely on toy environments, LPG generalises effectively to complex Atari games and achieves non-trivial performance. This shows the potential to discover general RL algorithms from data.
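
The abstract describes LPG as a meta-learned update rule that specifies both what the agent should predict (a learned prediction vector standing in for a value function) and how the policy should change, by processing the agent's experience. The sketch below is only an illustration of that structure, assuming PyTorch; the class and argument names (MetaUpdateRule, y_dim, per_step_inputs) are hypothetical placeholders and do not come from the paper.

```python
import torch
import torch.nn as nn

class MetaUpdateRule(nn.Module):
    """Hypothetical sketch of an LPG-style meta-network: a backward LSTM over a
    trajectory that emits per-step targets for the policy (pi_hat) and for a
    learned prediction vector (y_hat)."""

    def __init__(self, y_dim: int = 30, hidden: int = 64):
        super().__init__()
        # Per-step inputs (an assumption for this sketch): reward, done flag,
        # discount, log pi(a|s), plus the agent's current prediction vector y(s).
        self.lstm = nn.LSTM(input_size=4 + y_dim, hidden_size=hidden)
        self.pi_head = nn.Linear(hidden, 1)      # scalar steering the policy-gradient term
        self.y_head = nn.Linear(hidden, y_dim)   # target for the prediction vector y(s)

    def forward(self, per_step_inputs: torch.Tensor):
        # per_step_inputs: [T, B, 4 + y_dim]. Scanning the trajectory in reverse
        # lets later rewards shape earlier targets, which is where a
        # bootstrapping-like mechanism can emerge.
        out, _ = self.lstm(per_step_inputs.flip(0))
        out = out.flip(0)
        pi_hat = self.pi_head(out)                      # [T, B, 1]
        # Bounding y_hat to [0, 1] is an assumption of this sketch; the paper's
        # exact parameterization of the prediction target may differ.
        y_hat = torch.sigmoid(self.y_head(out))         # [T, B, y_dim]
        return pi_hat, y_hat
```

In the paper's setup, the agent moves its policy and predictions toward the emitted targets, while the meta-network's own parameters are trained by meta-gradients of agents' returns across a distribution of training environments; the sketch above covers only the target-producing network, not that outer meta-optimization loop.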

Authors (7)
  1. Junhyuk Oh (27 papers)
  2. Matteo Hessel (28 papers)
  3. Wojciech M. Czarnecki (15 papers)
  4. Zhongwen Xu (33 papers)
  5. Hado van Hasselt (57 papers)
  6. Satinder Singh (80 papers)
  7. David Silver (67 papers)
Citations (121)
