Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation (2210.12649v1)

Published 23 Oct 2022 in cs.CV and cs.RO

Abstract: Although human action anticipation is an inherently multi-modal task, state-of-the-art methods on well-known action anticipation datasets leverage this data by applying ensemble methods and averaging the scores of unimodal anticipation networks. In this work we introduce transformer-based modality fusion techniques, which unify multi-modal data at an early stage. Our Anticipative Feature Fusion Transformer (AFFT) proves superior to popular score fusion approaches and achieves state-of-the-art results, outperforming previous methods on EpicKitchens-100 and EGTEA Gaze+. Our model is easily extensible and allows new modalities to be added without architectural changes. Consequently, we extracted audio features on EpicKitchens-100, which we add to the set of features commonly used in the community.
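
For intuition, here is a minimal PyTorch sketch of the early feature fusion idea the abstract contrasts with score averaging: per-modality feature vectors are projected into a shared space and fused by a transformer encoder before any classification happens. This is an illustrative sketch under stated assumptions, not the authors' AFFT implementation; the class name, dimensions, modality embedding, and mean pooling are all hypothetical choices.

```python
# Hedged sketch of transformer-based early feature fusion (NOT the
# authors' AFFT code; names, dims, and pooling are assumptions).
import torch
import torch.nn as nn

class FeatureFusionTransformer(nn.Module):
    """Fuses unimodal feature vectors (e.g. RGB, flow, audio) via self-attention."""
    def __init__(self, dims, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        # Project each modality's features to a shared model dimension.
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in dims)
        # Learned per-modality embedding so the encoder can tell tokens apart.
        self.modality_emb = nn.Parameter(torch.zeros(len(dims), d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats):
        # feats: list of (batch, dim_i) unimodal feature tensors.
        tokens = torch.stack(
            [p(f) for p, f in zip(self.proj, feats)], dim=1
        ) + self.modality_emb                 # (batch, n_modalities, d_model)
        fused = self.encoder(tokens)          # cross-modal attention
        return fused.mean(dim=1)              # single fused representation

# Usage: fuse hypothetical RGB (2048-d), flow (2048-d), audio (1024-d) features.
fusion = FeatureFusionTransformer(dims=[2048, 2048, 1024])
rgb, flow, audio = torch.randn(4, 2048), torch.randn(4, 2048), torch.randn(4, 1024)
z = fusion([rgb, flow, audio])                # (4, 512), fed to an anticipation head

# The late "score fusion" baseline the abstract criticizes would instead run a
# separate classifier per modality and average the resulting class scores.
```

The key design contrast: score fusion only combines modalities at the final prediction, whereas early fusion lets modalities interact through attention before prediction, which is what the paper reports as the stronger approach.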

Authors (5)
  1. Zeyun Zhong (7 papers)
  2. David Schneider (25 papers)
  3. Michael Voit (35 papers)
  4. Rainer Stiefelhagen (155 papers)
  5. Jürgen Beyerer (40 papers)
Citations (39)
