
Improving Reinforcement Learning with Human Assistance: An Argument for Human Subject Studies with HIPPO Gym (2102.02639v1)

Published 2 Feb 2021 in cs.LG, cs.AI, and cs.HC

Abstract: Reinforcement learning (RL) is a popular machine learning paradigm for game playing, robotics control, and other sequential decision tasks. However, RL agents often have long learning times with high data requirements because they begin by acting randomly. In order to better learn in complex tasks, this article argues that an external teacher can often significantly help the RL agent learn. OpenAI Gym is a common framework for RL research, including a large number of standard environments and agents, making RL research significantly more accessible. This article introduces our new open-source RL framework, the Human Input Parsing Platform for OpenAI Gym (HIPPO Gym), and the design decisions that went into its creation. The goal of this platform is to facilitate human-RL research, again lowering the bar so that more researchers can quickly investigate different ways that human teachers could assist RL agents, including learning from demonstrations, learning from feedback, or curriculum learning.
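To make the "learning from feedback" setting the abstract mentions concrete, below is a minimal, hedged sketch of how binary human feedback could be blended into a standard OpenAI Gym interaction loop. This is not the HIPPO Gym API: `get_human_feedback` is a hypothetical stand-in for a real human-input channel, and the example assumes the classic `gym` step interface that returns `(obs, reward, done, info)`.

```python
# Illustrative only: reward shaping with human feedback in a plain Gym loop.
# Assumes the classic gym API (pre-0.26); get_human_feedback is a hypothetical
# placeholder for an actual human teacher's +1 / 0 / -1 signal.

import random
import gym


def get_human_feedback(observation, action):
    """Hypothetical stand-in for a human teacher's feedback on the last action."""
    return random.choice([-1, 0, 1])


def run_episode(env, policy, feedback_weight=0.5):
    """Run one episode, blending environment reward with human feedback."""
    obs = env.reset()
    done = False
    shaped_return = 0.0
    while not done:
        action = policy(obs)
        obs, reward, done, info = env.step(action)
        feedback = get_human_feedback(obs, action)
        # Shaped reward = environment reward plus weighted human signal.
        shaped_return += reward + feedback_weight * feedback
    return shaped_return


if __name__ == "__main__":
    env = gym.make("CartPole-v1")
    random_policy = lambda obs: env.action_space.sample()
    print(run_episode(env, random_policy))
```

In practice, HIPPO Gym's contribution is the web-based infrastructure for collecting such human signals (demonstrations, feedback, or curricula) from real participants; the shaping scheme above is only one simple way such a signal might be consumed by an agent.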

Authors (4)
  1. Matthew E. Taylor (69 papers)
  2. Nicholas Nissen (1 paper)
  3. Yuan Wang (251 papers)
  4. Neda Navidi (3 papers)
Citations (4)