Human Hands as Probes for Interactive Object Understanding (2112.09120v2)

Published 16 Dec 2021 in cs.CV, cs.AI, cs.LG, and cs.RO

Abstract: Interactive object understanding, or what we can do to objects and how, is a long-standing goal of computer vision. In this paper, we tackle this problem through observation of human hands in in-the-wild egocentric videos. We demonstrate that observing what human hands interact with, and how, can provide both the relevant data and the necessary supervision. Attending to hands readily localizes and stabilizes active objects for learning and reveals places where interactions with objects occur. Analyzing the hands shows what we can do to objects and how. We apply these basic principles to the EPIC-KITCHENS dataset and successfully learn state-sensitive features and object affordances (regions of interaction and afforded grasps), purely by observing hands in egocentric videos.
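The abstract only sketches the idea of using hand detections to localize active objects. As a rough illustration (not the authors' code), the snippet below shows how per-frame active-object boxes from an off-the-shelf hand-object interaction detector might be cropped with a small context margin to produce stabilized training patches for state and affordance learning; the box format and the `crop_active_object` helper are assumptions for this sketch.

```python
# Minimal sketch, assuming per-frame active-object boxes are already
# available from a hand-object interaction detector (an assumption;
# the paper builds on such detections in egocentric video).
import numpy as np

def crop_active_object(frame: np.ndarray,
                       obj_box: tuple,
                       context: float = 0.2) -> np.ndarray:
    """Crop the active-object box with a context margin, clipped to the frame."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = obj_box
    mx = context * (x2 - x1)
    my = context * (y2 - y1)
    x1 = int(max(0, x1 - mx)); y1 = int(max(0, y1 - my))
    x2 = int(min(w, x2 + mx)); y2 = int(min(h, y2 + my))
    return frame[y1:y2, x1:x2]

# Usage with a dummy frame and a hypothetical detection (x1, y1, x2, y2).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
active_obj_box = (200, 150, 320, 260)
patch = crop_active_object(frame, active_obj_box)
print(patch.shape)  # cropped region ready to be used as a training patch
```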

Authors (4)
  1. Mohit Goyal (9 papers)
  2. Sahil Modi (4 papers)
  3. Rishabh Goyal (4 papers)
  4. Saurabh Gupta (96 papers)
Citations (42)
