Text-Derived Knowledge Helps Vision: A Simple Cross-modal Distillation for Video-based Action Anticipation (2210.05991v2)

Published 12 Oct 2022 in cs.CV, cs.CL, and cs.LG

Abstract: Anticipating future actions in a video is useful for many autonomous and assistive technologies. Most prior action anticipation work treats this as a vision-modality problem, where models learn the task primarily from video features in the action anticipation datasets. However, knowledge about action sequences can also be obtained from external textual data. In this work, we show how knowledge in pretrained LLMs can be adapted and distilled into vision-based action anticipation models. We show that a simple distillation technique can achieve effective knowledge transfer and provide consistent gains over a strong vision model (Anticipative Video Transformer) on two action anticipation datasets (3.5% relative gain on EGTEA-GAZE+ and 7.2% relative gain on EPIC-KITCHENS 55), giving a new state-of-the-art result.
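The abstract does not spell out the distillation objective, but cross-modal distillation of this kind is commonly implemented as a soft-target loss: a text-derived teacher produces a distribution over next actions, and the vision student is trained to match it alongside the usual supervised loss. The sketch below is a minimal, generic version of that idea, not the paper's exact method; the temperature and weighting hyperparameters, and the assumption that teacher and student share one action vocabulary, are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Hypothetical cross-modal distillation objective.

    student_logits: next-action logits from the vision model (e.g., AVT).
    teacher_logits: next-action logits from a text-derived teacher,
                    over the same action vocabulary (assumed).
    labels:         ground-truth next-action indices.
    temperature, alpha: assumed hyperparameters, not from the paper.
    """
    # Standard supervised loss on the ground-truth next action.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-target term: push the student's distribution toward the
    # teacher's temperature-smoothed distribution.
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)  # rescale so gradients stay comparable across temperatures

    return alpha * ce + (1.0 - alpha) * kl
```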

Authors (4)
  1. Sayontan Ghosh
  2. Tanvi Aggarwal
  3. Minh Hoai
  4. Niranjan Balasubramanian