CapsFusion: Rethinking Image-Text Data at Scale (2310.20550v3)

Published 31 Oct 2023 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Large multimodal models demonstrate remarkable generalist ability to perform diverse multimodal tasks in a zero-shot manner. Large-scale web-based image-text pairs contribute fundamentally to this success, but suffer from excessive noise. Recent studies use alternative captions synthesized by captioning models and have achieved notable benchmark performance. However, our experiments reveal significant Scalability Deficiency and World Knowledge Loss issues in models trained with synthetic captions, which have been largely obscured by their initial benchmark success. Upon closer examination, we identify the root cause as the overly-simplified language structure and lack of knowledge details in existing synthetic captions. To provide higher-quality and more scalable multimodal pretraining data, we propose CapsFusion, an advanced framework that leverages LLMs to consolidate and refine information from both web-based image-text pairs and synthetic captions. Extensive experiments show that CapsFusion captions exhibit remarkable all-round superiority over existing captions in terms of model performance (e.g., 18.8 and 18.3 improvements in CIDEr score on COCO and NoCaps), sample efficiency (requiring 11-16 times less computation than baselines), world knowledge depth, and scalability. These effectiveness, efficiency and scalability advantages position CapsFusion as a promising candidate for future scaling of LMM training.

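To make the abstract's core idea concrete, below is a minimal illustrative sketch (not the authors' released code) of how an LLM-driven caption-fusion step like the one CapsFusion describes might be structured: a raw web caption and a synthetic caption are handed to an instruction-tuned LLM, which merges the world knowledge of the former with the fluency of the latter. The prompt wording and the `query_llm` helper are assumptions made for illustration only.

```python
# Illustrative sketch of LLM-based caption fusion (assumptions, not the
# paper's exact prompt or pipeline).

from typing import Callable

FUSION_PROMPT = (
    "Please merge and refine the information from the two caption sources "
    "into a single fluent caption.\n"
    "Caption 1 (raw web alt-text; may contain real-world knowledge but also noise): {raw}\n"
    "Caption 2 (synthetic caption; fluent but generic and lacking specifics): {synthetic}\n"
    "Fused caption:"
)

def fuse_captions(raw_caption: str,
                  synthetic_caption: str,
                  query_llm: Callable[[str], str]) -> str:
    """Combine a noisy web caption and a generic synthetic caption.

    `query_llm` is a user-supplied function that sends a prompt to any
    instruction-tuned LLM and returns its text completion.
    """
    prompt = FUSION_PROMPT.format(raw=raw_caption, synthetic=synthetic_caption)
    return query_llm(prompt).strip()

if __name__ == "__main__":
    # Toy stand-in for an LLM, just to keep the example runnable.
    def fake_llm(prompt: str) -> str:
        return "A 1967 Ford Mustang fastback parked on the side of a city street."

    fused = fuse_captions(
        raw_caption="1967 mustang fastback for sale LA!!!",
        synthetic_caption="A car parked on the side of a street.",
        query_llm=fake_llm,
    )
    print(fused)
```

The key design point the abstract emphasizes is that neither source alone suffices: raw captions carry entity-level knowledge but are noisy, while synthetic captions are clean but over-simplified, so the fusion step retains details from the former while keeping the readability of the latter.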
Authors (8)
  1. Qiying Yu (13 papers)
  2. Quan Sun (31 papers)
  3. Xiaosong Zhang (29 papers)
  4. Yufeng Cui (12 papers)
  5. Fan Zhang (685 papers)
  6. Yue Cao (147 papers)
  7. Xinlong Wang (56 papers)
  8. Jingjing Liu (139 papers)
Citations (36)