
Offline Learning from Demonstrations and Unlabeled Experience (2011.13885v1)

Published 27 Nov 2020 in cs.LG, cs.AI, cs.RO, and stat.ML

Abstract: Behavior cloning (BC) is often practical for robot learning because it allows a policy to be trained offline without rewards, by supervised learning on expert demonstrations. However, BC does not effectively leverage what we will refer to as unlabeled experience: data of mixed and unknown quality without reward annotations. This unlabeled data can be generated by a variety of sources such as human teleoperation, scripted policies and other agents on the same robot. Towards data-driven offline robot learning that can use this unlabeled experience, we introduce Offline Reinforced Imitation Learning (ORIL). ORIL first learns a reward function by contrasting observations from demonstrator and unlabeled trajectories, then annotates all data with the learned reward, and finally trains an agent via offline reinforcement learning. Across a diverse set of continuous control and simulated robotic manipulation tasks, we show that ORIL consistently outperforms comparable BC agents by effectively leveraging unlabeled experience.
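The three-stage pipeline the abstract describes (learn a reward by contrasting demonstrator and unlabeled observations, annotate all data with it, then run offline RL) can be sketched in miniature. This is a hedged illustration, not the paper's implementation: it uses synthetic observations and a simple logistic-regression discriminator as the learned reward model, and omits the final offline RL stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two data sources named in the abstract:
# expert demonstration observations vs. unlabeled mixed-quality observations.
expert_obs = rng.normal(loc=1.0, scale=0.5, size=(200, 4))
unlabeled_obs = rng.normal(loc=0.0, scale=1.0, size=(800, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: learn a reward function by contrasting observations from the
# demonstrator and the unlabeled pool. Here the "reward model" is a
# logistic-regression discriminator (a deliberate simplification).
X = np.vstack([expert_obs, unlabeled_obs])
y = np.concatenate([np.ones(len(expert_obs)), np.zeros(len(unlabeled_obs))])
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = p - y                      # gradient of binary cross-entropy
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# Stage 2: annotate every observation (expert and unlabeled alike) with
# the learned reward -- the discriminator's probability of "expert".
learned_reward = sigmoid(X @ w + b)

# Stage 3 (not shown): train an agent on the reward-annotated transitions
# with any offline RL algorithm.
expert_mean = learned_reward[:len(expert_obs)].mean()
unlabeled_mean = learned_reward[len(expert_obs):].mean()
print(expert_mean > unlabeled_mean)   # demonstrator states should score higher
```

Note that the unlabeled pool may itself contain near-expert trajectories; the paper treats this as a positive-unlabeled learning problem, whereas the sketch above naively labels all unlabeled data as negative.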

Authors (9)
  1. Konrad Zolna (24 papers)
  2. Alexander Novikov (30 papers)
  3. Ksenia Konyushkova (16 papers)
  4. Caglar Gulcehre (71 papers)
  5. Ziyu Wang (137 papers)
  6. Yusuf Aytar (36 papers)
  7. Misha Denil (36 papers)
  8. Nando de Freitas (98 papers)
  9. Scott Reed (32 papers)
Citations (64)