
Learning to Understand Goal Specifications by Modelling Reward (1806.01946v4)

Published 5 Jun 2018 in cs.AI and cs.LG

Abstract: Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales. To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data. This framework effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples.
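
The abstract describes the framework only at a high level: a reward model is trained from expert examples to judge whether a state satisfies an instruction, and the agent is trained with RL using that model's output in place of an environment reward. The sketch below is a minimal, hypothetical illustration of such a joint training loop, not the paper's actual implementation; the network shapes, the `rollout` helper, the thresholding rule, and all names are assumptions.

```python
# Hypothetical sketch of joint reward-model / agent training as described in
# the abstract. Expert (instruction, state) pairs are positives for the reward
# model; states reached by the agent are negatives. The agent is then updated
# by policy gradient using the reward model's score as its reward signal.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores how well a state satisfies an instruction (value in [0, 1])."""
    def __init__(self, instr_dim, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(instr_dim + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, instr, state):
        return self.net(torch.cat([instr, state], dim=-1)).squeeze(-1)

def joint_training_step(reward_model, policy, env, expert_batch,
                        rm_opt, pi_opt, threshold=0.5):
    # 1) Roll out the policy on sampled instructions; collect the instruction
    #    encodings, final states, and action log-probabilities.
    instr, agent_state, log_probs = rollout(policy, env)  # assumed helper

    # 2) Update the reward model: expert pairs are positives,
    #    agent-visited pairs are negatives.
    expert_instr, expert_state = expert_batch
    pos = reward_model(expert_instr, expert_state)
    neg = reward_model(instr.detach(), agent_state.detach())
    rm_loss = -(torch.log(pos + 1e-8).mean() +
                torch.log(1.0 - neg + 1e-8).mean())
    rm_opt.zero_grad(); rm_loss.backward(); rm_opt.step()

    # 3) Update the agent with a simple policy gradient, using the
    #    (thresholded) reward-model score instead of an environment reward.
    with torch.no_grad():
        reward = (reward_model(instr, agent_state) > threshold).float()
    pi_loss = -(log_probs.sum(dim=-1) * reward).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```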

Authors (7)
  1. Dzmitry Bahdanau (46 papers)
  2. Felix Hill (52 papers)
  3. Jan Leike (49 papers)
  4. Edward Hughes (40 papers)
  5. Arian Hosseini (13 papers)
  6. Pushmeet Kohli (116 papers)
  7. Edward Grefenstette (66 papers)
Citations (153)
