
Improving Behavioural Cloning with Positive Unlabeled Learning (2301.11734v2)

Published 27 Jan 2023 in cs.LG and cs.RO

Abstract: Learning control policies offline from pre-recorded datasets is a promising avenue for solving challenging real-world problems. However, available datasets are typically of mixed quality, with only a limited number of trajectories that we would consider positive examples; i.e., high-quality demonstrations. Therefore, we propose a novel iterative learning algorithm for identifying expert trajectories in unlabeled mixed-quality robotics datasets given a minimal set of positive examples, surpassing existing algorithms in terms of accuracy. We show that applying behavioral cloning to the resulting filtered dataset outperforms several competitive offline reinforcement learning and imitation learning baselines. We perform experiments on a range of simulated locomotion tasks and on two challenging manipulation tasks on a real robotic system; in these experiments, our method showcases state-of-the-art performance. Our website: \url{https://sites.google.com/view/offline-policy-learning-pubc}.
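The abstract's core idea, iteratively identifying expert trajectories from an unlabeled mixed-quality dataset given a few known positives, can be illustrated with a minimal positive-unlabeled (PU) filtering sketch. Everything below is an assumption for illustration: the paper does not specify this classifier or these function names, and real trajectories would be represented by learned features rather than 2-D points. A plain logistic-regression classifier stands in for whatever model the authors actually use.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, steps=500):
    """Toy logistic regression via gradient descent (stand-in classifier)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(positive)
        g = p - y                                # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def filter_expert_trajectories(positives, unlabeled, n_iters=3, threshold=0.3):
    """Iterative PU filtering sketch: treat unlabeled data as negatives,
    then repeatedly promote high-confidence unlabeled trajectories into
    the positive set. Hyperparameters here are illustrative guesses."""
    pos, unl = list(positives), list(unlabeled)
    for _ in range(n_iters):
        if not unl:
            break
        X = np.array(pos + unl)
        y = np.array([1.0] * len(pos) + [0.0] * len(unl))
        w, b = train_logreg(X, y)
        probs = 1.0 / (1.0 + np.exp(-(np.array(unl) @ w + b)))
        newly = [u for u, p in zip(unl, probs) if p >= threshold]
        if not newly:
            break
        pos += newly
        unl = [u for u, p in zip(unl, probs) if p < threshold]
    # Behavioral cloning would then be trained on `pos` only.
    return np.array(pos)

# Synthetic demo: "expert" features cluster near (2, 2), low-quality
# trajectories near (-2, -2); the unlabeled pool mixes both.
rng = np.random.default_rng(0)
known_experts = rng.normal([2, 2], 0.1, size=(10, 2))
hidden_experts = rng.normal([2, 2], 0.1, size=(10, 2))
noise = rng.normal([-2, -2], 0.1, size=(10, 2))
unlabeled_pool = list(hidden_experts) + list(noise)

filtered = filter_expert_trajectories(list(known_experts), unlabeled_pool)
print(f"kept {len(filtered)} of {len(known_experts) + len(unlabeled_pool)} trajectories")
```

On this toy data the filtered set grows beyond the initial positives while excluding the low-quality cluster, which is the behavior the paper's filtering step aims for at scale.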

Authors (9)
  1. Qiang Wang (271 papers)
  2. Robert McCarthy (11 papers)
  3. David Cordova Bulens (8 papers)
  4. Kevin McGuinness (76 papers)
  5. Noel E. O'Connor (70 papers)
  6. Nico Gürtler (9 papers)
  7. Felix Widmaier (11 papers)
  8. Francisco Roldan Sanchez (9 papers)
  9. Stephen J. Redmond (14 papers)
Citations (7)
