Fine-Tuning Multimodal LLMs for High-Quality Image-Text Data Filtering
The performance of Vision-Language Models (VLMs) and text-to-image generation models depends heavily on the quality of the image-text data they are trained on. Web-crawled image-text pairs, however, are often noisy: captions may be low quality or fail to match their images, creating a pressing need for effective data filtering. To this end, we introduce a novel approach that leverages fine-tuned Multimodal LLMs (MLMs) as data filters to select high-quality image-text pairs for VLM training.
Multimodal LLMs as Data Filters
Unlike CLIPScore, which assesses data quality via the cosine similarity between CLIP image and text embeddings, our method harnesses recent advances in MLMs for filtering. Our fine-tuned MLM filters generate precise and comprehensive quality scores, outperforming CLIPScore at identifying high-quality data that improves VLM performance.
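For reference, CLIPScore-style filtering ranks pairs by the cosine similarity between CLIP image and text embeddings. Below is a minimal sketch of that baseline, assuming the Hugging Face transformers CLIP implementation; the checkpoint name is illustrative, and the published CLIPScore metric additionally rescales the raw similarity.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; any CLIP variant can be substituted.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image: Image.Image, caption: str) -> float:
    """Cosine similarity between the CLIP image and text embeddings,
    the raw quantity underlying CLIPScore-based filtering."""
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return float((image_emb * text_emb).sum())
```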
Constructing High-Quality Instruction Data
To enable MLMs to generate accurate quality scores, we fine-tune them on dedicated quality-scoring tasks. To construct the instruction data for these tasks, we leverage proprietary models such as GPT-4 and GPT-4V, combined with state-of-the-art image captioning models such as LLaVA and ShareGPT4V, to create detailed text descriptions of images. These descriptions support evaluating image-text pairs along several quality metrics: Image-Text Matching (ITM), Object Detail Fulfillment (ODF), Caption Text Quality (CTQ), and Semantic Understanding (SU).
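The sketch below shows how scoring instructions for the four metrics might be templated before being paired with an image and sent to GPT-4V to collect reference scores. The prompt wording and the 0-100 scale are illustrative assumptions, not the exact prompts used in this work.

```python
# Hypothetical prompt templates for the four quality-scoring tasks.
SCORING_PROMPTS = {
    "image_text_matching": (
        "Rate from 0 to 100 how well the caption matches the image content, "
        "considering objects, attributes, and relations. Caption: \"{caption}\""
    ),
    "object_detail_fulfillment": (
        "Rate from 0 to 100 how thoroughly the caption describes the details "
        "of the objects in the image (count, color, size, position). "
        "Caption: \"{caption}\""
    ),
    "caption_text_quality": (
        "Rate from 0 to 100 the fluency and grammatical quality of the caption, "
        "ignoring the image. Caption: \"{caption}\""
    ),
    "semantic_understanding": (
        "Rate from 0 to 100 how much semantic information the caption adds "
        "beyond what is directly visible in the image. Caption: \"{caption}\""
    ),
}

def build_scoring_instruction(metric: str, caption: str) -> str:
    """Fill the template for one quality metric; the result is paired with the
    image and sent to a teacher model to obtain a reference score."""
    return SCORING_PROMPTS[metric].format(caption=caption)
```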
Fine-Tuning MLMs for Data Filtering
Through comprehensive ablation studies, we optimized the fine-tuning recipe for MLMs on multimodal instruction data tailored to the scoring tasks. By mixing the scoring-task instructions with instructions from other multimodal tasks, we obtain a diverse and rich training set; fine-tuning the MLMs on this mixture enhances their ability to function as effective data filters.
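A minimal sketch of assembling such a mixed instruction set is shown below. The mixing ratio, record layout, and function names are assumptions for illustration; the actual composition is determined by the ablation studies.

```python
import random

def build_mixed_instruction_set(scoring_examples, general_examples,
                                scoring_fraction=0.2, seed=0):
    """Blend quality-scoring instructions with general multimodal instructions
    so the MLM learns the scoring tasks while retaining broad
    instruction-following ability. Each example is assumed to be a
    LLaVA-style record, e.g. {"image": ..., "conversations": [...]}."""
    rng = random.Random(seed)
    n_general = int(len(scoring_examples) * (1 - scoring_fraction) / scoring_fraction)
    mixed = scoring_examples + rng.sample(
        general_examples, min(n_general, len(general_examples)))
    rng.shuffle(mixed)
    return mixed
```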
Evaluation on DataComp Benchmark
We evaluated our MLM filters on the DataComp benchmark, which pre-trains VLMs on the filtered datasets and assesses them across a suite of downstream tasks. The results show significant improvements over existing data filtering techniques, including CLIPScore, demonstrating the efficacy of the proposed MLM filters in selecting high-quality image-text data for VLM training.
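In practice, the filter reduces to ranking the candidate pool by the MLM-generated quality score and keeping the top fraction. The sketch below illustrates this thresholding step; the function name and keep fraction are assumptions, not values prescribed by the benchmark.

```python
import numpy as np

def filter_by_mlm_score(pairs, scores, keep_fraction=0.3):
    """Keep the image-text pairs whose MLM quality score falls in the
    top `keep_fraction` of the candidate pool."""
    scores = np.asarray(scores, dtype=np.float32)
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    kept = [pair for pair, s in zip(pairs, scores) if s >= threshold]
    return kept, float(threshold)
```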
Conclusion and Future Directions
Our work represents a significant step forward in data filtering for VLM training. By harnessing the power of fine-tuned MLMs, we offer a novel and effective solution for selecting high-quality image-text pairs. The success of our MLM filters on the DataComp benchmark highlights their potential as superior alternatives to existing data filtering methods. As the field continues to evolve, further research is encouraged to explore and expand the capabilities of MLMs in data quality assessment and filtering tasks.
The capability of our MLM filters to accurately evaluate the quality of image-text data from various perspectives and improve the performance of VLMs suggests a promising direction for future research in enhancing the robustness and effectiveness of pre-trained models.