Tag2Text: Guiding Vision-Language Model via Image Tagging (2303.05657v3)
Abstract: This paper presents Tag2Text, a vision-language pre-training (VLP) framework that introduces image tagging into vision-language models to guide the learning of visual-linguistic features. In contrast to prior works, which utilize object tags either manually labeled or automatically detected with an off-the-shelf detector of limited performance, our approach explicitly learns an image tagger using tags parsed from image-paired text, thus providing strong semantic guidance to vision-language models. In this way, Tag2Text can utilize large-scale annotation-free image tags derived from image-text pairs, and provides more diverse tag categories beyond objects. As a result, Tag2Text demonstrates the capabilities of a foundational image tagging model, with zero-shot performance comparable even to fully supervised models. Moreover, by leveraging the tagging guidance, Tag2Text effectively enhances the performance of vision-language models on both generation-based and alignment-based tasks. Across a wide range of downstream benchmarks, Tag2Text achieves state-of-the-art results at similar model sizes and data scales, demonstrating the efficacy of the proposed tagging guidance. Code, demo and pre-trained models are available at https://github.com/xinyu1205/recognize-anything.
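The abstract's key idea is that tag supervision can be obtained for free by parsing tags out of the captions already paired with images. The sketch below is not the official Tag2Text parser; it is a minimal, hypothetical illustration of that idea, assuming a small hand-picked tag vocabulary (`TAG_VOCABULARY`) and simple whole-word matching, whereas the actual pipeline uses a more sophisticated semantic parser and a much larger category list.

```python
# Minimal sketch (not the official Tag2Text parser): derive annotation-free
# tag labels from an image-paired caption by matching a fixed tag vocabulary.
# TAG_VOCABULARY and the caption below are illustrative assumptions.

import re
from typing import List

TAG_VOCABULARY = ["dog", "frisbee", "grass", "run", "catch", "person", "beach"]


def parse_tags(caption: str, vocabulary: List[str] = TAG_VOCABULARY) -> List[str]:
    """Return vocabulary tags that appear as whole words in the caption."""
    words = set(re.findall(r"[a-z]+", caption.lower()))
    return [tag for tag in vocabulary if tag in words]


def tags_to_multihot(tags: List[str], vocabulary: List[str] = TAG_VOCABULARY) -> List[int]:
    """Convert a tag list into a multi-hot target for a multi-label tagging head."""
    tag_set = set(tags)
    return [1 if tag in tag_set else 0 for tag in vocabulary]


if __name__ == "__main__":
    caption = "A dog runs across the grass to catch a frisbee."
    tags = parse_tags(caption)  # -> ['dog', 'frisbee', 'grass', 'catch']
    print(tags)
    print(tags_to_multihot(tags))  # multi-hot label over TAG_VOCABULARY
```

In the paper's framework, such parsed tags serve as multi-label targets for the tagging head during pre-training, and the recognized tags in turn guide caption generation and image-text alignment.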
Authors:
- Xinyu Huang
- Youcai Zhang
- Jinyu Ma
- Weiwei Tian
- Rui Feng
- Yuejie Zhang
- Yaqian Li
- Yandong Guo
- Lei Zhang