Grafting Pre-trained Models for Multimodal Headline Generation (2211.07210v1)

Published 14 Nov 2022 in cs.CV and cs.AI

Abstract: Multimodal headline generation uses both video frames and transcripts to produce a natural-language title for a video. Because large-scale, manually annotated data are lacking, annotating grounded headlines for video is labor-intensive and impractical. Previous research on pre-trained language models and video-language models has achieved significant progress on related downstream tasks. However, none of these models can be applied directly to the multimodal headline architecture, which requires both a multimodal encoder and a sentence decoder. A major challenge in simply gluing a language model to a video-language model is modality balance, i.e., combining the complementary abilities of the visual and language modalities. In this paper, we propose a novel approach that grafts the video encoder from a pre-trained video-language model onto a generative pre-trained language model. We also present a consensus fusion mechanism that integrates the different components via inter-/intra-modality relations. Empirically, experiments show that the grafted model achieves strong results on a brand-new dataset collected from real-world applications.
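
The grafting idea can be pictured as a thin layer of glue code between two pre-trained models: a video encoder feeding a generative language-model decoder, with a fusion module mediating between modalities. The PyTorch sketch below is only one possible reading of the abstract, not the authors' implementation; the module names, dimensions, the frozen encoder, and the cross-/self-attention fusion design are all assumptions.

```python
# Illustrative sketch of "grafting" a pre-trained video encoder onto a
# generative LM, with a consensus-fusion module. All names, shapes, and
# design choices here are assumptions, not the paper's actual code.
import torch
import torch.nn as nn


class ConsensusFusion(nn.Module):
    """Assumed fusion design: inter-modality cross-attention followed by
    intra-modality self-attention over the fused sequence."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, text_feats: torch.Tensor, video_feats: torch.Tensor):
        # Inter-modality relation: transcript tokens query video-frame features.
        cross, _ = self.inter(text_feats, video_feats, video_feats)
        fused = self.norm1(text_feats + cross)
        # Intra-modality relation: the fused sequence refines itself.
        refined, _ = self.intra(fused, fused, fused)
        return self.norm2(fused + refined)


class GraftedHeadliner(nn.Module):
    """Graft a pre-trained video encoder onto a generative LM decoder.
    Freezing the grafted encoder is an assumption, not stated in the abstract."""

    def __init__(self, video_encoder: nn.Module, lm_decoder: nn.Module,
                 video_dim: int, lm_dim: int):
        super().__init__()
        self.video_encoder = video_encoder
        for p in self.video_encoder.parameters():
            p.requires_grad = False          # keep the grafted encoder frozen
        self.proj = nn.Linear(video_dim, lm_dim)  # bridge the two feature spaces
        self.fusion = ConsensusFusion(lm_dim)
        self.lm_decoder = lm_decoder         # e.g. a GPT-style decoder stack

    def forward(self, frames: torch.Tensor, transcript_embeds: torch.Tensor):
        video_feats = self.proj(self.video_encoder(frames))
        fused = self.fusion(transcript_embeds, video_feats)
        return self.lm_decoder(fused)        # logits for the headline tokens


# Toy usage with stand-in modules (real pre-trained models would go here):
model = GraftedHeadliner(
    video_encoder=nn.Linear(512, 512),   # stand-in for a video-language encoder
    lm_decoder=nn.Linear(768, 32000),    # stand-in for an LM head over a 32k vocab
    video_dim=512, lm_dim=768,
)
logits = model(torch.randn(2, 16, 512),  # batch of 2 videos, 16 frame features
               torch.randn(2, 20, 768))  # 20 transcript token embeddings each
```

The projection layer and fusion module are the only newly trained parts in this sketch, which matches the motivation of reusing pre-trained components rather than training a multimodal encoder-decoder from scratch.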

Authors (6)
  1. Lingfeng Qiao (8 papers)
  2. Chen Wu (169 papers)
  3. Ye Liu (153 papers)
  4. Haoyuan Peng (3 papers)
  5. Di Yin (26 papers)
  6. Bo Ren (60 papers)
Citations (3)