Affordance Perception by a Knowledge-Guided Vision-Language Model with Efficient Error Correction (2407.13368v1)

Published 18 Jul 2024 in cs.CV

Abstract: Mobile robot platforms will increasingly be tasked with activities that involve grasping and manipulating objects in open-world environments. Affordance understanding provides a robot with the means to realise its goals and execute its tasks, e.g. to achieve autonomous navigation in unknown buildings where it has to find doors and ways to open them. To get actionable suggestions, robots need to be able to distinguish subtle differences between objects, as these may result in different action sequences: doorknobs require grasp and twist, while handlebars require grasp and push. In this paper, we improve affordance perception for a robot in an open-world setting. Our contribution is threefold: (1) We provide an affordance representation with precise, actionable affordances; (2) We connect this knowledge base to a foundational vision-language model (VLM) and prompt the VLM for a wider variety of new and unseen objects; (3) We apply a human-in-the-loop for corrections on the output of the VLM. The mix of affordance representation, image detection, and a human-in-the-loop is effective for a robot searching for objects to achieve its goals. We have demonstrated this in a scenario of finding various doors and the many different ways to open them.
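
To make the three-part pipeline concrete, here is a minimal sketch of how a knowledge base of actionable affordances could guide a VLM query, with a human-in-the-loop correction when the model's answer falls outside the known options. All names (AFFORDANCES, query_vlm, affordance_for) and the stubbed VLM call are illustrative assumptions, not the paper's actual representation or interface.

```python
# Hypothetical affordance knowledge base: object type -> actionable sequence.
# The paper's representation is richer; this mapping is only for illustration.
AFFORDANCES = {
    "doorknob": ["grasp", "twist", "pull"],
    "handlebar": ["grasp", "push"],
    "push plate": ["push"],
}


def build_prompt(detected_object: str) -> str:
    """Ground the VLM query in the knowledge base so answers stay actionable."""
    options = ", ".join(AFFORDANCES)
    return (
        f"An image shows a '{detected_object}'. Which of these known door "
        f"openers does it most resemble: {options}? Answer with one option."
    )


def query_vlm(prompt: str) -> str:
    """Placeholder for a real vision-language model call (not specified here)."""
    return "doorknob"  # stand-in answer so the sketch runs end to end


def affordance_for(detected_object: str) -> list[str]:
    """Query the VLM, then fall back to a human correction if the answer
    is not a known object type (the efficient error-correction step)."""
    answer = query_vlm(build_prompt(detected_object)).strip().lower()
    if answer not in AFFORDANCES:
        answer = input(f"VLM said '{answer}'; correct object type? ").strip().lower()
    return AFFORDANCES.get(answer, [])


if __name__ == "__main__":
    print(affordance_for("round metal handle"))  # -> ['grasp', 'twist', 'pull']
```

The design point this sketch tries to capture is that the knowledge base constrains the open-ended VLM to a closed set of precise, executable action sequences, while the human only intervenes on the rare out-of-vocabulary answers.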

Authors (6)
  1. Gertjan Burghouts
  2. Marianne Schaaphok
  3. Michael van Bekkum
  4. Wouter Meijer
  5. Fieke Hillerström
  6. Jelle van Mil
