Disentangling CLIP for Multi-Object Perception (2502.02977v3)
Abstract: Vision-language models (VLMs) like CLIP excel at recognizing the single, prominent object in a scene. However, they struggle in complex scenes containing multiple objects. We identify a fundamental reason behind this limitation: the VLM feature space exhibits significant semantic entanglement, where features of one class contain substantial information about other, unrelated classes, a phenomenon we term mutual feature information (MFI). This entanglement becomes evident during class-specific queries, as unrelated objects are activated alongside the queried class. To address this limitation, we propose DCLIP, a framework that disentangles CLIP features using two complementary objectives: a novel MFI Loss that orthogonalizes the text (class) features to reduce inter-class similarity, and the Asymmetric Loss (ASL) that aligns image features with the disentangled text features. Our experiments demonstrate that DCLIP reduces inter-class feature similarity by 30% compared to CLIP, leading to significant performance gains on multi-label recognition (MLR) and zero-shot semantic segmentation (ZS3). In MLR, DCLIP outperforms SOTA approaches on VOC2007 and COCO-14 while using 75% fewer parameters, and surpasses SOTA ZS3 methods by 3.4 mIoU on VOC2012 and 2.8 mIoU on COCO-17. These results establish feature disentanglement as a critical factor for effective multi-object perception in vision-language models.
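The abstract describes the MFI Loss only at a high level (orthogonalizing class text features to reduce inter-class similarity). The snippet below is a minimal PyTorch sketch of one plausible reading, assuming the objective penalizes the off-diagonal entries of the class-wise cosine-similarity matrix; the function name `mfi_loss` and the squared-penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mfi_loss(text_features: torch.Tensor) -> torch.Tensor:
    """Sketch of an MFI-style disentanglement objective (assumed form:
    mean squared off-diagonal cosine similarity between class embeddings).

    text_features: (C, D) class text embeddings, e.g. from the CLIP text encoder.
    Returns a scalar that is zero when all class embeddings are mutually orthogonal.
    """
    z = F.normalize(text_features, dim=-1)        # unit-norm class embeddings
    sim = z @ z.t()                               # (C, C) cosine-similarity matrix
    c = sim.shape[0]
    eye = torch.eye(c, device=sim.device, dtype=sim.dtype)
    off_diag = sim * (1.0 - eye)                  # keep only inter-class similarities
    return (off_diag ** 2).sum() / (c * (c - 1))  # average over the C*(C-1) off-diagonal entries

# Toy usage: 80 hypothetical class embeddings of dimension 512.
text_feats = torch.randn(80, 512, requires_grad=True)
loss = mfi_loss(text_feats)
loss.backward()  # gradients push class embeddings toward mutual orthogonality
```

Driving these off-diagonal similarities toward zero is one way to operationalize the reduction in inter-class feature similarity reported in the abstract; per the abstract, DCLIP pairs this text-side objective with the Asymmetric Loss (ASL) to align image features with the disentangled text features.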