Fine-grained Audible Video Description (2303.15616v1)
Abstract: We explore a new task for audio-visual-language modeling called fine-grained audible video description (FAVD). It aims to provide detailed textual descriptions for given audible videos, covering the appearance and spatial location of each object, the actions of moving objects, and the sounds in the video. Existing visual-language modeling tasks often concentrate on visual cues in videos while undervaluing the language and audio modalities. FAVD, in contrast, requires not only audio-visual-language modeling skills but also paragraph-level language generation abilities. We construct the first fine-grained audible video description benchmark (FAVDBench) to facilitate this research. For each video clip, we first provide a one-sentence summary of the video, i.e., the caption, followed by 4-6 sentences describing the visual details and 1-2 audio-related descriptions at the end. The descriptions are provided in both English and Chinese. We create two new metrics for this task: an EntityScore to gauge the completeness of entities in the visual descriptions, and an AudioScore to assess the audio descriptions. As a preliminary approach to this task, we propose an audio-visual-language transformer that extends an existing video captioning model with an additional audio branch. We combine the masked language modeling and auto-regressive language modeling losses to optimize our model so that it can produce paragraph-level descriptions. We illustrate the effectiveness of our model in audio-visual-language modeling by evaluating it against the proposed benchmark using both conventional captioning metrics and our proposed metrics. We further put our benchmark to the test in video generation models, demonstrating that employing fine-grained video descriptions can create more intricate videos than using captions.
- Xuyang Shen
- Dong Li
- Jinxing Zhou
- Zhen Qin
- Bowen He
- Xiaodong Han
- Aixuan Li
- Yuchao Dai
- Lingpeng Kong
- Meng Wang
- Yu Qiao
- Yiran Zhong
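The abstract mentions combining a masked language modeling loss with an auto-regressive language modeling loss to train the captioning transformer. The paper's exact formulation is not given here, so the following is only a minimal, illustrative sketch of such a combined objective: both terms are softmax cross-entropies over token logits, one evaluated at masked positions and one at next-token positions, mixed with a hypothetical weight `alpha`.

```python
import math

def cross_entropy(logits, targets, weight):
    # logits: list of per-position score lists over the vocabulary.
    # Softmax cross-entropy averaged over positions where weight is 1.
    total, count = 0.0, 0
    for row, t, w in zip(logits, targets, weight):
        if not w:
            continue
        m = max(row)
        log_z = m + math.log(sum(math.exp(s - m) for s in row))
        total += log_z - row[t]
        count += 1
    return total / max(count, 1)

def combined_lm_loss(logits, tokens, mlm_mask, alpha=0.5):
    # Masked-LM term: predict the original token at each masked position.
    l_mlm = cross_entropy(logits, tokens, mlm_mask)
    # Autoregressive term: the logits at position t predict token t+1.
    l_ar = cross_entropy(logits[:-1], tokens[1:], [1] * (len(tokens) - 1))
    # alpha is an assumed mixing weight, not taken from the paper.
    return alpha * l_mlm + (1 - alpha) * l_ar
```

With uniform logits over a vocabulary of size V, both terms reduce to log V, so the combined loss equals log V regardless of `alpha`; this gives a quick sanity check of the implementation.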