GIST: Generating Image-Specific Text for Fine-grained Object Classification (2307.11315v2)
Abstract: Recent vision-language models outperform vision-only models on many image classification tasks. However, because of the absence of paired text/image descriptions, it remains difficult to fine-tune these models for fine-grained image classification. In this work, we propose a method, GIST, for generating image-specific fine-grained text descriptions from image-only datasets, and show that these text descriptions can be used to improve classification. Key parts of our method include (1) prompting a pretrained large language model with domain-specific prompts to generate diverse fine-grained text descriptions for each class, and (2) using a pretrained vision-language model to match each image to label-preserving text descriptions that capture relevant visual features in the image. We demonstrate the utility of GIST by fine-tuning vision-language models on the image-and-generated-text pairs to learn an aligned vision-language representation space for improved classification. We evaluate our learned representation space in full-shot and few-shot scenarios across four diverse fine-grained classification datasets, each from a different domain. Our method achieves an average improvement of $4.1\%$ in accuracy over CLIP linear probes and an average improvement of $1.1\%$ in accuracy over the previous state-of-the-art image-text classification method on the full-shot datasets. Our method achieves similar improvements across few-shot regimes. Code is available at https://github.com/emu1729/GIST.
- Kathleen M. Lewis
- Emily Mu
- Adrian V. Dalca
- John Guttag
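
To make the matching step (2) concrete, below is a minimal sketch, not the authors' released code, of pairing each image with label-preserving text: for a given image, only the descriptions generated for its ground-truth class are scored, and the top-k descriptions under CLIP image-text similarity are kept as that image's paired text. The checkpoint name, the example `class_descriptions` (a hypothetical stand-in for the LLM-generated descriptions of step (1)), the `match_descriptions` helper, and the `top_k` value are all illustrative assumptions, not details from the paper.

```python
# Sketch of the GIST-style description-matching step using Hugging Face CLIP.
# Assumptions: checkpoint choice, example descriptions, and top_k are illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical stand-in for step (1): fine-grained per-class descriptions
# produced by prompting a large language model with domain-specific prompts.
class_descriptions = {
    "painted bunting": [
        "a small finch with a blue head, green back, and red underparts",
        "a brightly multicolored songbird with a thick seed-eating bill",
    ],
    "indigo bunting": [
        "a small songbird with uniformly deep blue plumage",
        "a sparrow-sized bird with a short conical bill and dark wings",
    ],
}

def match_descriptions(image: Image.Image, label: str, top_k: int = 1):
    """Return the top-k descriptions of the image's own class that best
    match the image under CLIP similarity (hence label-preserving)."""
    candidates = class_descriptions[label]
    inputs = processor(text=candidates, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_image has shape (1, num_candidates): one
        # image-text similarity score per candidate description.
        scores = model(**inputs).logits_per_image[0]
    best = scores.topk(min(top_k, len(candidates))).indices.tolist()
    return [candidates[i] for i in best]

# Usage: build image-and-generated-text pairs before fine-tuning, e.g.
# image = Image.open("bunting.jpg")
# paired_texts = match_descriptions(image, "painted bunting", top_k=2)
```

Restricting the candidate pool to the image's own class is what keeps the matched text label-preserving: the vision-language model only chooses among descriptions that are already consistent with the ground-truth label.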