
Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning (2003.05012v4)

Published 10 Mar 2020 in cs.LG, cs.AI, and stat.ML

Abstract: To facilitate research in the direction of sample efficient reinforcement learning, we held the MineRL Competition on Sample Efficient Reinforcement Learning Using Human Priors at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019). The primary goal of this competition was to promote the development of algorithms that use human demonstrations alongside reinforcement learning to reduce the number of samples needed to solve complex, hierarchical, and sparse environments. We describe the competition, outlining the primary challenge, the competition design, and the resources that we provided to the participants. We provide an overview of the top solutions, each of which uses deep reinforcement learning and/or imitation learning. We also discuss the impact of our organizational decisions on the competition and future directions for improvement.
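The abstract's core idea, using human demonstrations alongside reinforcement learning, typically involves a behavioral-cloning stage: a policy is first fit to the demonstrators' (state, action) pairs, then fine-tuned with RL. The sketch below illustrates only the cloning stage on toy data; the data, dimensions, and training setup are illustrative assumptions, not details from the paper or any specific MineRL entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "human demonstrations": states in R^4, with the expert choosing
# one of 3 discrete actions via argmax of a fixed linear scoring.
W_expert = rng.normal(size=(4, 3))
states = rng.normal(size=(500, 4))
actions = np.argmax(states @ W_expert, axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Behavioral cloning: fit a softmax policy to the (state, action) pairs
# by minimizing cross-entropy with plain gradient descent.
W = np.zeros((4, 3))
for _ in range(300):
    probs = softmax(states @ W)
    grad = states.T @ (probs - np.eye(3)[actions]) / len(states)
    W -= 0.5 * grad

# Fraction of demonstration actions the cloned policy reproduces.
accuracy = (np.argmax(states @ W, axis=1) == actions).mean()
```

In a full pipeline of the kind the competition targeted, the cloned policy would serve as an initialization (or auxiliary loss) for a sample-efficient RL algorithm rather than as the final agent.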

Authors (8)
  1. Stephanie Milani (23 papers)
  2. Nicholay Topin (17 papers)
  3. Brandon Houghton (13 papers)
  4. William H. Guss (7 papers)
  5. Sharada P. Mohanty (5 papers)
  6. Keisuke Nakata (2 papers)
  7. Oriol Vinyals (116 papers)
  8. Noboru Sean Kuno (2 papers)
Citations (27)
