Cross-modal Contrastive Distillation for Instructional Activity Anticipation (2201.06734v1)

Published 18 Jan 2022 in cs.CV

Abstract: In this study, we aim to predict plausible future action steps given an observation of the past and study the task of instructional activity anticipation. Unlike previous anticipation tasks that aim at action label prediction, our work targets generating natural language outputs that provide interpretable and accurate descriptions of future action steps. This is a challenging task due to the lack of semantic information extracted from the instructional videos. To overcome this challenge, we propose a novel knowledge distillation framework that exploits related external textual knowledge to assist the visual anticipation task. However, previous knowledge distillation techniques generally transfer information within the same modality. To bridge the gap between the visual and text modalities during the distillation process, we devise a novel cross-modal contrastive distillation (CCD) scheme, which facilitates knowledge distillation between teacher and student in heterogeneous modalities via the proposed cross-modal distillation loss. We evaluate our method on the Tasty Videos dataset. CCD improves the anticipation performance of the visual-only student model by a large relative margin of 40.2% in BLEU4. Our approach also outperforms state-of-the-art approaches by a large margin.
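
The abstract describes distilling knowledge from a text-modality teacher into a visual-modality student through a contrastive loss. The sketch below is not the paper's exact CCD objective; it is a minimal, generic InfoNCE-style cross-modal contrastive loss, assuming paired (video segment, text step) embeddings in a batch. The function name, the temperature value, and the symmetric two-direction formulation are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(student_visual: torch.Tensor,
                                 teacher_text: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive distillation between a visual student and a
    text teacher. Matching pairs along the batch diagonal are positives;
    all other student-teacher pairings in the batch are negatives.
    Both inputs have shape (batch, dim)."""
    # L2-normalize so dot products are cosine similarities.
    s = F.normalize(student_visual, dim=-1)
    t = F.normalize(teacher_text, dim=-1)

    # Pairwise similarity matrix between student and teacher embeddings.
    logits = s @ t.t() / temperature                      # (batch, batch)
    targets = torch.arange(s.size(0), device=s.device)    # diagonal = positives

    # Symmetric loss: student-to-teacher and teacher-to-student directions.
    loss_s2t = F.cross_entropy(logits, targets)
    loss_t2s = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_s2t + loss_t2s)
```

In a distillation setup of this kind, the teacher embeddings would typically be precomputed (or produced with gradients detached) from the external textual knowledge, while the student's visual encoder is trained to pull its representations toward the matching text-side representations.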

Authors (7)
  1. Zhengyuan Yang (86 papers)
  2. Jingen Liu (22 papers)
  3. Jing Huang (140 papers)
  4. Xiaodong He (162 papers)
  5. Tao Mei (209 papers)
  6. Chenliang Xu (114 papers)
  7. Jiebo Luo (355 papers)
Citations (5)
