Semi-supervised reward learning for offline reinforcement learning (2012.06899v1)

Published 12 Dec 2020 in cs.LG, cs.AI, and cs.RO

Abstract: In offline reinforcement learning (RL), agents are trained on a logged dataset. This appears to be the most natural route to real-life applications because, in domains such as healthcare and robotics, interactions with the environment are either expensive or unethical. Training agents usually requires reward functions, but rewards are seldom available in practice and engineering them is challenging and laborious. To overcome this, we investigate reward learning under the constraint of minimizing human reward annotations. We consider two types of supervision: timestep annotations and demonstrations. We propose semi-supervised learning algorithms that learn from limited annotations and incorporate unlabelled data. In our experiments with a simulated robotic arm, we greatly improve upon behavioural cloning and closely approach the performance achieved with ground-truth rewards. We further investigate the relationship between the quality of the reward model and the final policies. We observe, for example, that reward models need not be perfect to produce useful policies.
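
To make the setup concrete, below is a minimal sketch of one semi-supervised reward-learning recipe for the timestep-annotation case: fit a small reward model on a handful of human-annotated timesteps, self-train on the unlabelled pool by adopting the model's confident predictions as pseudo-labels, and finally relabel the logged dataset with learned rewards for a downstream offline RL learner. This is an illustrative assumption-laden sketch, not the paper's exact algorithm; the network sizes, confidence thresholds, loss weighting, and the self-training scheme are all hypothetical choices.

```python
# Sketch of semi-supervised reward learning for offline RL (illustrative only;
# not the authors' exact method). A reward model is trained on a few annotated
# timesteps plus pseudo-labelled unlabelled transitions, then used to relabel
# the logged dataset.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 2  # assumed dimensions for a toy robotic-arm task

reward_model = nn.Sequential(
    nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),  # per-timestep reward annotation in [0, 1]
)
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Tiny labelled set (human timestep annotations) and a large unlabelled pool
# (placeholder random data standing in for logged (obs, action) pairs).
x_lab = torch.randn(64, OBS_DIM + ACT_DIM)
y_lab = torch.rand(64, 1).round()            # binary "good step" annotations
x_unl = torch.randn(4096, OBS_DIM + ACT_DIM)

for step in range(500):
    # Supervised loss on the few annotated timesteps.
    loss = nn.functional.binary_cross_entropy(reward_model(x_lab), y_lab)

    # Self-training on unlabelled data: keep only confident predictions and
    # treat them as pseudo-labels (one simple semi-supervised scheme).
    with torch.no_grad():
        p = reward_model(x_unl)
        mask = (p > 0.95) | (p < 0.05)   # assumed confidence threshold
        pseudo = p.round()
    if mask.any():
        loss = loss + 0.5 * nn.functional.binary_cross_entropy(
            reward_model(x_unl)[mask], pseudo[mask])

    opt.zero_grad()
    loss.backward()
    opt.step()

# Relabel the logged transitions with learned rewards; any offline RL
# algorithm can then be trained on the relabelled dataset.
with torch.no_grad():
    relabelled_rewards = reward_model(x_unl).squeeze(-1)
```

The key design point this sketch illustrates is the paper's observation that the reward model only needs to rank behaviour well enough to guide policy learning; noisy pseudo-labels on the unlabelled pool can still produce a useful relabelled dataset.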

Authors (7)
  1. Ksenia Konyushkova (16 papers)
  2. Konrad Zolna (24 papers)
  3. Yusuf Aytar (36 papers)
  4. Alexander Novikov (30 papers)
  5. Scott Reed (32 papers)
  6. Serkan Cabi (15 papers)
  7. Nando de Freitas (98 papers)
Citations (23)