
READ: Recurrent Adapter with Partial Video-Language Alignment for Parameter-Efficient Transfer Learning in Low-Resource Video-Language Modeling (2312.06950v2)

Published 12 Dec 2023 in cs.CV and cs.CL

Abstract: Fully fine-tuning pretrained large-scale transformer models has become a popular paradigm for video-language modeling tasks, such as temporal language grounding and video-language summarization. With a growing number of tasks and limited training data, such a full fine-tuning approach leads to costly model storage and unstable training. To overcome these shortcomings, we introduce lightweight adapters to the pre-trained model and only update them at fine-tuning time. However, existing adapters fail to capture intrinsic temporal relations among video frames or textual words. Moreover, they neglect the preservation of critical task-related information that flows from the raw video-language input into the adapter's low-dimensional space. To address these issues, we first propose a novel REcurrent ADapter (READ) that employs recurrent computation to enable temporal modeling capability. Second, we propose a Partial Video-Language Alignment (PVLA) objective based on partial optimal transport to maintain task-related information flowing into our READ modules. We validate our READ framework through extensive experiments where READ significantly outperforms all existing fine-tuning strategies on multiple low-resource temporal language grounding and video-language summarization benchmarks. The code, model, and data have been made available at https://nguyentthong.github.io/READ.
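To make the core idea concrete, below is a minimal sketch of a recurrent bottleneck adapter in PyTorch. This is an illustration based only on the abstract: the choice of a GRU for the recurrent computation, the bottleneck width, and the class/parameter names (RecurrentAdapter, bottleneck_dim) are assumptions, not the authors' READ implementation, and the PVLA objective is omitted.

```python
import torch
import torch.nn as nn

class RecurrentAdapter(nn.Module):
    """Bottleneck adapter whose low-dimensional path is recurrent, so the
    adapted features can model temporal relations across video frames or
    textual words while the pretrained backbone stays frozen.
    (Hypothetical sketch; not the paper's exact READ module.)"""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project into low-dim adapter space
        self.rnn = nn.GRU(bottleneck_dim, bottleneck_dim, batch_first=True)  # recurrent temporal mixing
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back to backbone width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim) features from a frozen transformer block
        z = torch.relu(self.down(x))
        z, _ = self.rnn(z)             # recurrence over the frame/word dimension
        return x + self.up(z)          # residual connection preserves the backbone signal

# Usage: insert after a frozen block's output and train only the adapter parameters.
feats = torch.randn(2, 16, 768)        # e.g. 16 video frames with 768-d features
adapter = RecurrentAdapter(hidden_dim=768)
out = adapter(feats)                   # same shape: (2, 16, 768)
```

The residual form keeps the adapter parameter-efficient: only the down/up projections and the recurrent cell are trained, while the pretrained transformer weights remain untouched.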

Authors (8)
  1. Thong Nguyen (38 papers)
  2. Xiaobao Wu (43 papers)
  3. Xinshuai Dong (25 papers)
  4. Khoi Le (5 papers)
  5. Zhiyuan Hu (30 papers)
  6. Cong-Duy Nguyen (16 papers)
  7. See-Kiong Ng (103 papers)
  8. Luu Anh Tuan (55 papers)
Citations (2)
