Quantifying Hypothesis Space Misspecification in Learning from Human-Robot Demonstrations and Physical Corrections (2002.00941v2)

Published 3 Feb 2020 in cs.RO, cs.AI, cs.HC, cs.LG, and stat.ML

Abstract: Human input has enabled autonomous systems to improve their capabilities and achieve complex behaviors that are otherwise challenging to generate automatically. Recent work focuses on how robots can use such input - like demonstrations or corrections - to learn intended objectives. These techniques assume that the human's desired objective already exists within the robot's hypothesis space. In reality, this assumption is often inaccurate: there will always be situations where the person might care about aspects of the task that the robot does not know about. Without this knowledge, the robot cannot infer the correct objective. Hence, when the robot's hypothesis space is misspecified, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. In this paper, we posit that the robot should reason explicitly about how well it can explain human inputs given its hypothesis space and use that situational confidence to inform how it should incorporate human input. We demonstrate our method on a 7 degree-of-freedom robot manipulator in learning from two important types of human input: demonstrations of manipulation tasks, and physical corrections during the robot's task execution.
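The abstract's core idea (estimate how well the robot's hypothesis space can explain the human input, then weight learning by that situational confidence) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a linear feature-based reward hypothesis space and a Boltzmann-style observation model, and every function name, threshold, and feature choice below is hypothetical.

import numpy as np

# Hypothetical feature map over a trajectory of 3D waypoints (T x 3 array):
# total path length and mean end-effector height.
def features(trajectory):
    steps = np.diff(trajectory, axis=0)
    return np.array([np.linalg.norm(steps, axis=1).sum(), trajectory[:, 2].mean()])

# How well does the best available hypothesis explain the demonstration?
# Boltzmann-style log-likelihood of the demo against sampled alternatives.
def explanation_quality(phi_demo, thetas, phi_alternatives):
    best = -np.inf
    for theta in thetas:
        rewards = np.array([theta @ p for p in phi_alternatives] + [theta @ phi_demo])
        log_lik = (theta @ phi_demo) - rewards.max() - np.log(np.exp(rewards - rewards.max()).sum())
        best = max(best, log_lik)
    return best

# Map explanation quality to a confidence in (0, 1); threshold and sharpness
# are illustrative tuning knobs, not values from the paper.
def situational_confidence(log_lik, threshold=-2.0, sharpness=5.0):
    return 1.0 / (1.0 + np.exp(-sharpness * (log_lik - threshold)))

# Confidence-weighted objective update: trust the human input only to the
# extent the hypothesis space can explain it.
def update_theta(theta, phi_demo, phi_expected, confidence, lr=0.1):
    return theta + lr * confidence * (phi_demo - phi_expected)

Under this sketch, a confidence near zero signals misspecification, so the input barely moves the objective estimate instead of being forced into an inadequate hypothesis space, which is the behavior the abstract argues for.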

Authors (5)
  1. Andreea Bobu (21 papers)
  2. Andrea Bajcsy (36 papers)
  3. Jaime F. Fisac (35 papers)
  4. Sampada Deglurkar (6 papers)
  5. Anca D. Dragan (70 papers)
Citations (39)
