
Bridging Visual Perception with Contextual Semantics for Understanding Robot Manipulation Tasks (1909.07459v2)

Published 16 Sep 2019 in cs.CV

Abstract: Understanding manipulation scenarios allows intelligent robots to plan appropriate actions to complete a manipulation task successfully. It is essential for intelligent robots to semantically interpret manipulation knowledge by describing entities, relations, and attributes in a structural manner. In this paper, we propose a framework that generates high-level conceptual dynamic knowledge graphs from video clips. A combination of a vision-language model and an ontology system, corresponding to visual perception and contextual semantics respectively, is used to represent robot manipulation knowledge as Entity-Relation-Entity (E-R-E) and Entity-Attribute-Value (E-A-V) tuples. The proposed method is flexible and versatile. Using the framework, we present a case study in which a robot performs manipulation actions in a kitchen environment, bridging visual perception with contextual semantics through the generated dynamic knowledge graphs.
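
To make the E-R-E / E-A-V representation concrete, here is a minimal Python sketch of a per-frame dynamic knowledge graph built from such tuples. The class, method names, and the kitchen-scene entities below are illustrative assumptions, not the authors' actual implementation or data.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicKnowledgeGraph:
    # (entity, relation, entity) tuples, e.g. ("robot_hand", "grasps", "cup")
    ere: set = field(default_factory=set)
    # (entity, attribute, value) tuples, e.g. ("cup", "contains", "water")
    eav: set = field(default_factory=set)

    def assert_relation(self, subj: str, rel: str, obj: str) -> None:
        self.ere.add((subj, rel, obj))

    def assert_attribute(self, ent: str, attr: str, val: str) -> None:
        # Keep one current value per (entity, attribute): drop stale tuples.
        self.eav = {(e, a, v) for (e, a, v) in self.eav
                    if not (e == ent and a == attr)}
        self.eav.add((ent, attr, val))

# One graph per video keyframe captures how the manipulation scene evolves;
# the entities and relations here are hypothetical kitchen-scene examples.
frame_t0 = DynamicKnowledgeGraph()
frame_t0.assert_relation("robot_hand", "approaches", "cup")
frame_t0.assert_attribute("cup", "contains", "nothing")

frame_t1 = DynamicKnowledgeGraph()
frame_t1.assert_relation("robot_hand", "grasps", "cup")
frame_t1.assert_attribute("cup", "contains", "water")
```

In this reading, the vision side would emit the detected entities and relations per frame, while the ontology side constrains which relations and attribute values are admissible; the sequence of graphs then serves as the dynamic knowledge the robot plans against.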

Citations (2)
