DASZL: Dynamic Action Signatures for Zero-shot Learning (1912.03613v3)
Abstract: There are many realistic applications of activity recognition where the set of potential activity descriptions is combinatorially large. This makes end-to-end supervised training of a recognition system impractical, since no practical training set can encompass the entire label set. In this paper, we present an approach to fine-grained recognition that models activities as compositions of dynamic action signatures. This compositional approach allows us to reframe fine-grained recognition as zero-shot activity recognition, where a detector is composed "on the fly" from simple first-principles state machines supported by deep-learned components. We evaluate our method on the Olympic Sports and UCF101 datasets, where our model establishes a new state of the art under multiple experimental paradigms. We also extend this method to form a unique framework for zero-shot joint segmentation and classification of activities in video and demonstrate the first results in zero-shot decoding of complex action sequences on a widely-used surgical dataset. Lastly, we show that we can use off-the-shelf object detectors to recognize activities in completely de novo settings with no additional training.
- Tae Soo Kim (20 papers)
- Jonathan D. Jones (2 papers)
- Michael Peven (6 papers)
- Zihao Xiao (18 papers)
- Jin Bai (5 papers)
- Yi Zhang (994 papers)
- Weichao Qiu (33 papers)
- Alan Yuille (294 papers)
- Gregory D. Hager (79 papers)
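The abstract's idea of composing a detector "on the fly" from simple state machines over per-frame action signatures can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names, the set-of-signatures-per-frame representation, and the example activities are all assumptions introduced for illustration.

```python
# Hypothetical sketch: a zero-shot activity detector composed "on the fly"
# from a simple state machine over per-frame action-signature detections.
# All names and structure here are illustrative, not the paper's actual code.

from dataclasses import dataclass


@dataclass
class ActionStateMachine:
    """Accepts an activity if its ordered signature states fire in sequence."""
    states: list   # ordered action signatures, e.g. ["run", "raise_arm", "throw"]
    index: int = 0

    def step(self, detected: set) -> None:
        # Advance only when the next required signature appears in this frame.
        if self.index < len(self.states) and self.states[self.index] in detected:
            self.index += 1

    @property
    def accepted(self) -> bool:
        return self.index == len(self.states)


def recognize(frames, activity_signatures):
    """Return activities whose state machines complete over the frame stream.

    frames: iterable of sets of detected action signatures per frame
            (e.g. produced by off-the-shelf object/action detectors).
    activity_signatures: dict mapping activity name -> ordered signature list,
                         written from first principles with no activity-level training.
    """
    machines = {name: ActionStateMachine(list(sig))
                for name, sig in activity_signatures.items()}
    for detected in frames:
        for machine in machines.values():
            machine.step(detected)
    return [name for name, machine in machines.items() if machine.accepted]


# Example: two activity signatures defined by hand; only one completes.
frames = [{"run"}, {"run"}, {"run", "raise_arm"}, {"throw"}, set()]
sigs = {"javelin_throw": ["run", "raise_arm", "throw"],
        "long_jump": ["run", "jump", "land"]}
print(recognize(frames, sigs))  # → ['javelin_throw']
```

The point of the sketch is that adding a new activity requires only writing a new signature list, not retraining any model, which is what makes the label set effectively open-ended.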