
Watch-n-Patch: Unsupervised Learning of Actions and Relations (1603.03541v1)

Published 11 Mar 2016 in cs.CV, cs.LG, and cs.RO

Abstract: There is a large variation in the activities that humans perform in their everyday lives. We consider modeling these composite human activities, which comprise multiple basic-level actions, in a completely unsupervised setting. Our model learns high-level co-occurrence and temporal relations between the actions. We consider the video as a sequence of short-term action clips, which contain human-words and object-words. An activity is represented by a set of action-topics and object-topics indicating which actions are present and which objects are being interacted with. We then propose a new probabilistic model relating the words and the topics. It allows us to model long-range action relations that commonly exist in composite activities, which is challenging for previous works. We apply our model to unsupervised action segmentation and clustering, and to a novel application that detects forgotten actions, which we call action patching. For evaluation, we contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several human daily activities as compositions of multiple actions interacting with different objects. Moreover, we develop a robotic system that watches people and reminds them by applying our action patching algorithm. Our robotic setup can be easily deployed on any assistive robot.
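
The representation described in the abstract can be sketched concretely. Below is a minimal, illustrative Python sketch (class and field names are assumptions for exposition, not the authors' implementation) of a video as a sequence of short-term clips holding human-words and object-words, with latent action-topics from which co-occurrence and temporal relations could be read off.

```python
# Illustrative sketch only: a video as a sequence of short-term clips with
# human-words and object-words, each clip assigned a latent action-topic.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Clip:
    human_words: list[str]            # quantized human-pose/motion features
    object_words: list[str]           # quantized object-appearance features
    action_topic: int | None = None   # latent topic inferred by the model
    object_topics: list[int] = field(default_factory=list)


@dataclass
class ActivityVideo:
    clips: list[Clip]

    def topic_sequence(self) -> list[int | None]:
        """Temporal order of action-topics, the basis for long-range relations."""
        return [c.action_topic for c in self.clips]

    def topic_cooccurrence(self) -> Counter:
        """Unordered pairs of action-topics that co-occur in the same video."""
        topics = sorted({c.action_topic for c in self.clips
                         if c.action_topic is not None})
        return Counter((a, b) for i, a in enumerate(topics)
                       for b in topics[i + 1:])


# Hypothetical example: a kitchen activity clustered into three action-topics.
video = ActivityVideo(clips=[
    Clip(human_words=["reach", "grasp"], object_words=["bowl"], action_topic=0),
    Clip(human_words=["pour"], object_words=["cereal_box", "bowl"], action_topic=1),
    Clip(human_words=["pour"], object_words=["milk", "bowl"], action_topic=2),
])
print(video.topic_sequence())       # [0, 1, 2]
print(video.topic_cooccurrence())   # Counter({(0, 1): 1, (0, 2): 1, (1, 2): 1})
```

In the paper's setting these topic assignments are latent and inferred jointly with the co-occurrence and temporal relations; the sketch only shows the data layout, not the inference.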

Authors (6)
  1. Chenxia Wu (7 papers)
  2. Jiemi Zhang (3 papers)
  3. Ozan Sener (28 papers)
  4. Bart Selman (33 papers)
  5. Silvio Savarese (200 papers)
  6. Ashutosh Saxena (43 papers)
Citations (26)
