T2ICount: Enhancing Cross-modal Understanding for Zero-Shot Counting (2502.20625v2)
Abstract: Zero-shot object counting aims to count instances of arbitrary object categories specified by text descriptions. Existing methods typically rely on vision-language models such as CLIP, but often exhibit limited sensitivity to text prompts. We present T2ICount, a diffusion-based framework that leverages the rich prior knowledge and fine-grained visual understanding of pretrained diffusion models. While one-step denoising ensures efficiency, it leads to weakened text sensitivity. To address this challenge, we propose a Hierarchical Semantic Correction Module that progressively refines text-image feature alignment, and a Representational Regional Coherence Loss that provides reliable supervision signals by leveraging the cross-attention maps extracted from the denoising U-Net. Furthermore, we observe that current benchmarks mainly focus on majority objects in images, potentially masking models' text sensitivity. To address this, we contribute a challenging re-annotated subset of FSC147 for better evaluation of text-guided counting ability. Extensive experiments demonstrate that our method achieves superior performance across different benchmarks. Code is available at https://github.com/cha15yq/T2ICount.
- Yifei Qian (7 papers)
- Zhongliang Guo (14 papers)
- Bowen Deng (30 papers)
- Chun Tong Lei (3 papers)
- Shuai Zhao (116 papers)
- Chun Pong Lau (26 papers)
- Xiaopeng Hong (59 papers)
- Michael P. Pound (3 papers)
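The abstract's core ingredients are one-step denoising with a pretrained diffusion U-Net and supervision derived from its cross-attention maps. The sketch below (not the authors' implementation; checkpoint name, timestep, and prompt are illustrative assumptions) shows one way to run a single denoising pass through a Stable Diffusion U-Net with the `diffusers` library and record the cross-attention maps that relate image locations to prompt tokens.

```python
# Minimal sketch: one-step denoising through a pretrained Stable Diffusion
# U-Net while recording cross-attention maps. Illustrative only; not the
# T2ICount implementation. Checkpoint, timestep, and prompt are assumptions.
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from diffusers.models.attention_processor import AttnProcessor
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

attn_store = []  # collects cross-attention probabilities from every layer


class StoreCrossAttnProcessor:
    """Vanilla attention processor that also records attention probabilities."""

    def __init__(self, store):
        self.store = store

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, **kwargs):
        query = attn.to_q(hidden_states)
        context = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
        key = attn.to_k(context)
        value = attn.to_v(context)
        query = attn.head_to_batch_dim(query)
        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)
        probs = attn.get_attention_scores(query, key, attention_mask)
        if encoder_hidden_states is not None:  # cross-attention only
            self.store.append(probs.detach().cpu())  # (batch*heads, H*W, 77)
        out = torch.bmm(probs, value)
        out = attn.batch_to_head_dim(out)
        out = attn.to_out[0](out)
        return attn.to_out[1](out)


# Install the recording processor on cross-attention ("attn2") layers only.
procs = {name: StoreCrossAttnProcessor(attn_store) if "attn2" in name else AttnProcessor()
         for name in unet.attn_processors}
unet.set_attn_processor(procs)

prompt = "a photo of strawberries"  # text description of the category to count
tokens = tokenizer(prompt, padding="max_length",
                   max_length=tokenizer.model_max_length,
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    text_emb = text_encoder(tokens.input_ids.to(device))[0]

    # Encode an image to latents, add noise at a single timestep, and run one
    # forward pass through the U-Net ("one-step denoising").
    image = torch.rand(1, 3, 512, 512, device=device) * 2 - 1  # placeholder image in [-1, 1]
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    t = torch.tensor([50], device=device)  # illustrative small timestep
    noisy = scheduler.add_noise(latents, torch.randn_like(latents), t)
    _ = unet(noisy, t, encoder_hidden_states=text_emb)

# attn_store now holds per-layer cross-attention maps; averaging over heads
# and prompt tokens gives a rough text-relevance map over image locations,
# the kind of signal the paper refines and supervises against.
```

Maps at different U-Net resolutions can be upsampled to a common size and aggregated; the paper's Hierarchical Semantic Correction Module and Representational Regional Coherence Loss build on such multi-scale text-image alignment rather than using the raw maps directly.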