CountCLIP -- [Re] Teaching CLIP to Count to Ten (2406.03586v2)
Abstract: Large vision-language models (VLMs) have been shown to learn rich joint image-text representations, enabling high performance on relevant downstream tasks. However, they fail to demonstrate a quantitative understanding of objects and lack a good counting-aware representation. This paper conducts a reproducibility study of 'Teaching CLIP to Count to Ten' (Paiss et al., 2023), which presents a method to finetune a CLIP model (Radford et al., 2021) by introducing a counting-contrastive loss term, improving zero-shot counting accuracy in an image while maintaining zero-shot classification performance. We improve the model's performance on a smaller subset of their training data with lower computational resources. We verify these claims by reproducing their study with our own code. The implementation can be found at https://github.com/SforAiDl/CountCLIP.
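In Paiss et al. (2023), the counting-contrastive term contrasts an image against its original caption and a counterfactual caption in which the number word has been swapped, and this term is added to the standard CLIP loss during finetuning. The snippet below is a minimal PyTorch sketch of such a term, assuming L2-normalized embeddings; the function names, temperature, and loss weight are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def counting_contrastive_loss(image_emb, true_text_emb, cf_text_emb, temperature=0.07):
    """Counting-contrastive term (sketch): prefer the caption with the correct
    object count over a counterfactual caption whose number word was swapped.
    All embeddings are assumed L2-normalized, shape (batch, dim)."""
    pos = (image_emb * true_text_emb).sum(dim=-1) / temperature  # similarity to correct-count caption
    neg = (image_emb * cf_text_emb).sum(dim=-1) / temperature    # similarity to counterfactual caption
    logits = torch.stack([pos, neg], dim=-1)                     # binary choice per image
    targets = torch.zeros(image_emb.size(0), dtype=torch.long, device=image_emb.device)
    return F.cross_entropy(logits, targets)

def total_loss(clip_loss, count_loss, lam=1.0):
    # Finetuning objective: standard CLIP contrastive loss plus a weighted counting term.
    # `lam` is a placeholder weighting hyperparameter, not the paper's reported value.
    return clip_loss + lam * count_loss
```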
- Harshvardhan Mestha
- Karan Bania
- Shreyas V
- Yash Bhisikar
- Tejas Agrawal