
Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters (2403.02677v1)

Published 5 Mar 2024 in cs.CV and cs.CL

Abstract: We propose a novel framework for filtering image-text data by leveraging fine-tuned Multimodal Language Models (MLMs). Our approach outperforms predominant filtering methods (e.g., CLIPScore) via integrating the recent advances in MLMs. We design four distinct yet complementary metrics to holistically measure the quality of image-text data. A new pipeline is established to construct high-quality instruction data for fine-tuning MLMs as data filters. Comparing with CLIPScore, our MLM filters produce more precise and comprehensive scores that directly improve the quality of filtered data and boost the performance of pre-trained models. We achieve significant improvements over CLIPScore on popular foundation models (i.e., CLIP and BLIP2) and various downstream tasks. Our MLM filter can generalize to different models and tasks, and be used as a drop-in replacement for CLIPScore. An additional ablation study is provided to verify our design choices for the MLM filter.

Fine-Tuning Multimodal Language Models for High-Quality Image-Text Data Filtering

The performance of Vision-Language Models (VLMs) and text-to-image generation models depends heavily on the quality of the image-text data they are trained on. Web-crawled image-text data, however, often contain noise, such as low-quality captions or images that do not match the corresponding text, creating a pressing need for effective data filtering techniques. To address this, we introduce a novel approach that leverages fine-tuned Multimodal Language Models (MLMs) as data filters to select high-quality image-text pairs for VLM training.

Multimodal LLMs as Data Filters

Unlike CLIPScore, which uses the CLIP model to compute the cosine similarity between image and text embeddings as a measure of data quality, our method draws on recent advances in MLMs. The fine-tuned MLM filters generate precise and comprehensive quality scores, outperforming CLIPScore at identifying high-quality data that improves VLM performance.
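
For reference, CLIPScore-style filtering reduces to a single cosine similarity between CLIP's image and text embeddings. The sketch below computes that score with the Hugging Face transformers CLIP implementation; the checkpoint name and the use of the unscaled similarity are illustrative choices on our part (the original CLIPScore metric rescales the cosine), not details taken from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; any CLIP variant with image/text encoders works the same way.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_filter_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings for one pair."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    # Normalize both embeddings, then take the dot product (cosine similarity).
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()
```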

Constructing High-Quality Instruction Data

To enable MLMs to generate accurate quality scores, we fine-tune them on dedicated quality-scoring tasks. To construct the required instruction data for these tasks, we leverage proprietary models such as GPT-4 and GPT-4V, combined with state-of-the-art image captioning models such as LLaVA and ShareGPT4V, to create detailed text descriptions of images. The resulting data supports evaluating image-text pairs along four quality metrics: Image-Text Matching (ITM), Object Detail Fulfillment (ODF), Caption Text Quality (CTQ), and Semantic Understanding (SU).
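
To make the scoring setup concrete, the snippet below sketches how instructions for the four metrics might be assembled when querying a teacher model such as GPT-4V for scores. The prompt wording, the 0-100 scale, and the requested output format are illustrative assumptions, not the paper's released templates.

```python
# Illustrative prompt templates for the four quality metrics; the wording,
# score range, and output format are assumptions, not the paper's prompts.
METRIC_PROMPTS = {
    "ITM": "Rate from 0 to 100 how well the caption matches the overall content of the image.",
    "ODF": "Rate from 0 to 100 how thoroughly the caption covers the salient objects and their details in the image.",
    "CTQ": "Rate from 0 to 100 the fluency and grammatical quality of the caption text itself.",
    "SU":  "Rate from 0 to 100 how much semantic or contextual understanding beyond surface appearance the caption conveys.",
}

def build_scoring_instruction(caption: str, metric: str) -> str:
    """Format the text side of one quality-scoring request for an image-text pair.

    The image itself is passed to the multimodal teacher model separately;
    this helper only assembles the textual instruction.
    """
    return (
        f"Caption: {caption}\n"
        f"{METRIC_PROMPTS[metric]}\n"
        "Answer with a single integer score followed by a one-sentence rationale."
    )
```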

Fine-Tuning MLMs for Data Filtering

Through comprehensive ablation studies, we optimize the fine-tuning process for MLMs on multimodal instruction data tailored to the scoring tasks. By mixing the scoring instructions with instructions from other multimodal tasks, we obtain a diverse and rich training set; instruction-tuning MLMs on this mixture enhances their ability to function as effective data filters.
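
A minimal sketch of this data-mixing step, assuming the scoring and general instruction examples are already in a common record format; the 50/50 blend and the sampling strategy are illustrative choices, since the paper arrives at its mixture through the ablation study rather than a fixed rule.

```python
import random

def mix_instruction_data(scoring_examples, general_examples,
                         scoring_fraction=0.5, seed=0):
    """Blend quality-scoring instructions with general multimodal instructions.

    `scoring_fraction` is the share of scoring examples in the final mixture;
    the 50/50 default is illustrative, not the paper's tuned ratio.
    """
    rng = random.Random(seed)
    # Number of general examples needed so that scoring examples make up
    # the requested fraction of the mixed set.
    n_general = int(len(scoring_examples) * (1 - scoring_fraction) / scoring_fraction)
    mixed = list(scoring_examples) + rng.sample(general_examples,
                                                min(n_general, len(general_examples)))
    rng.shuffle(mixed)
    return mixed
```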

Evaluation on the DataComp Benchmark

We evaluated our MLM filters using the DataComp benchmark, which involves pre-training VLMs on filtered datasets and assessing their performance across a suite of downstream tasks. The results demonstrate significant improvements over existing data filtering techniques, including CLIPScore, illustrating the efficacy of our proposed MLM filters in selecting high-quality image-text data for training VLMs.
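
In a DataComp-style setup, both CLIPScore and the MLM filter ultimately yield one score per image-text pair, and filtering keeps the highest-scoring fraction of the candidate pool; this shared interface is what makes the MLM filter a drop-in replacement. A minimal sketch, assuming the per-pair scores are already computed and using an illustrative 30% keep rate rather than the paper's tuned fraction:

```python
import numpy as np

def select_top_fraction(scores: np.ndarray, fraction: float = 0.3) -> np.ndarray:
    """Return indices of the highest-scoring image-text pairs.

    `scores` can hold CLIPScore values or MLM filter scores interchangeably;
    the 30% keep rate is an illustrative choice, not the paper's tuned value.
    """
    threshold = np.quantile(scores, 1.0 - fraction)
    return np.nonzero(scores >= threshold)[0]
```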

Conclusion and Future Directions

Our work represents a significant step forward in the field of data filtering for VLM training. By harnessing the power of fine-tuned MLMs, we offer a novel and effective solution for selecting high-quality, comprehensive image-text pairs. The success of our MLM filters on the DataComp benchmark highlights their potential as superior alternatives to existing data filtering methods. As the field continues to evolve, further research is encouraged to explore and expand upon the capabilities of MLMs in data quality assessment and filtering tasks.

The capability of our MLM filters to accurately evaluate the quality of image-text data from various perspectives and improve the performance of VLMs suggests a promising direction for future research in enhancing the robustness and effectiveness of pre-trained models.

Authors (7)
  1. Weizhi Wang (18 papers)
  2. Khalil Mrini (10 papers)
  3. Linjie Yang (48 papers)
  4. Sateesh Kumar (6 papers)
  5. Yu Tian (249 papers)
  6. Xifeng Yan (52 papers)
  7. Heng Wang (136 papers)
Citations (9)