Unified Contrastive Learning in Image-Text-Label Space (2204.03610v1)

Published 7 Apr 2022 in cs.CV, cs.AI, and cs.LG

Abstract: Visual recognition is recently learned via either supervised learning on human-annotated image-label data or language-image contrastive learning with webly-crawled image-text pairs. While supervised learning may result in a more discriminative representation, language-image pretraining shows unprecedented zero-shot recognition capability, largely due to the different properties of data sources and learning objectives. In this work, we introduce a new formulation by combining the two data sources into a common image-text-label space. In this space, we propose a new learning paradigm, called Unified Contrastive Learning (UniCL) with a single learning objective to seamlessly prompt the synergy of two data types. Extensive experiments show that our UniCL is an effective way of learning semantically rich yet discriminative representations, universally for image recognition in zero-shot, linear-probe, fully finetuning and transfer learning scenarios. Particularly, it attains gains up to 9.2% and 14.5% in average on zero-shot recognition benchmarks over the language-image contrastive learning and supervised learning methods, respectively. In linear probe setting, it also boosts the performance over the two methods by 7.3% and 3.4%, respectively. Our study also indicates that UniCL stand-alone is a good learner on pure image-label data, rivaling the supervised learning methods across three image classification datasets and two types of vision backbones, ResNet and Swin Transformer. Code is available at https://github.com/microsoft/UniCL.

Citations (198)

Summary

  • The paper introduces Unified Contrastive Learning (UniCL) that bridges image-label and image-text data into a common space for richer semantic representations.
  • UniCL employs a bidirectional contrastive objective that aligns images with their corresponding texts and labels, achieving average zero-shot gains of up to 9.2% over language-image contrastive learning and 14.5% over supervised learning.
  • The proposed methodology supports end-to-end training and demonstrates superior performance in transfer learning, linear probing, and fine-tuning scenarios.

Unified Contrastive Learning in Image-Text-Label Space

The paper "Unified Contrastive Learning in Image-Text-Label Space" introduces a novel approach to visual recognition by combining supervised and contrastive learning methodologies. The authors propose a learning paradigm called Unified Contrastive Learning (UniCL) which integrates both image-label and image-text data into a shared space, termed the image-text-label space. This approach aims to harness the strengths of both data types to develop semantically rich yet discriminative representations.

Key Contributions

  1. Unified Image-Text-Label Space: The paper presents a new perspective by situating both image-label and image-text data in a common space, bridging structured labels and free-form textual descriptions (see the data-unification sketch after this list).
  2. Unified Contrastive Learning (UniCL): UniCL is designed as a bidirectional contrastive learning method that simultaneously handles both data types. It leverages the strengths of supervised learning’s discriminative qualities and the semantic richness of image-text data.
  3. Experimental Results: Extensive experiments demonstrate the superiority of UniCL in multiple scenarios, including zero-shot recognition, linear probing, full fine-tuning, and transfer learning. Notably, UniCL achieves up to 9.2% and 14.5% improvements in zero-shot recognition over previous methods relying solely on language-image contrastive learning and supervised learning, respectively.
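
As a concrete illustration of the common space, the sketch below shows one plausible way to convert both data sources into (image, text, label) triplets: class names in image-label data are turned into short texts via a prompt template, and each distinct caption in image-text data receives its own fresh label. The helper names and the prompt template are illustrative, not taken from the official UniCL codebase.

```python
# Illustrative sketch: unifying image-label and image-text data into
# (image, text, label) triplets. Helper names and the prompt template
# are hypothetical, not the authors' code.

def unify_image_label(samples, class_names, prompt="a photo of a {}."):
    """Image-label data: generate a text for each sample from its class name."""
    return [(img, prompt.format(class_names[y]), y) for img, y in samples]

def unify_image_text(samples, num_classes):
    """Image-text data: assign each distinct caption its own new label,
    continuing after the existing class labels."""
    triplets, next_label = [], num_classes
    caption_to_label = {}
    for img, caption in samples:
        if caption not in caption_to_label:
            caption_to_label[caption] = next_label
            next_label += 1
        triplets.append((img, caption, caption_to_label[caption]))
    return triplets
```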

Methodological Insights

UniCL employs a bidirectional learning objective consisting of image-to-text and text-to-image contrastive losses. The framework maximizes agreement between matched image-text pairs while using label information to define positives: images and texts that share a label are pulled together, while mismatched pairs are pushed apart. By training a visual encoder and a language encoder jointly, the approach produces representations with both discriminative power and semantic depth.
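
The following is a minimal PyTorch-style sketch of such a label-aware bidirectional contrastive loss, assuming precomputed L2-normalized image and text features and integer labels for a batch of triplets. It illustrates the idea rather than reproducing the authors' implementation; the averaging over positive targets follows the common SupCon-style convention.

```python
import torch
import torch.nn.functional as F

def unicl_loss(image_feats, text_feats, labels, temperature=0.07):
    """Label-aware bidirectional contrastive loss (sketch).

    image_feats, text_feats: (N, d) L2-normalized batch features.
    labels: (N,) integer labels; entries sharing a label are positives.
    """
    logits = image_feats @ text_feats.t() / temperature              # (N, N)
    # pos_mask[i, j] = 1 if sample i and sample j share a label
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()   # (N, N)

    # image-to-text: each image's log-probability, averaged over its positive texts
    log_prob_i2t = F.log_softmax(logits, dim=1)
    loss_i2t = -(pos_mask * log_prob_i2t).sum(1) / pos_mask.sum(1)

    # text-to-image: same computation with rows and columns swapped
    log_prob_t2i = F.log_softmax(logits.t(), dim=1)
    loss_t2i = -(pos_mask * log_prob_t2i).sum(1) / pos_mask.sum(1)

    return (loss_i2t + loss_t2i).mean() / 2
```

On a pure image-label batch this behaves like a supervised contrastive loss over prompt texts, and on an image-text batch where every caption carries a unique label it reduces to a CLIP-style pairwise loss, which is what lets a single objective cover both data types.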

Comparisons and Observations

  • Against Cross-Entropy (CE) and SupCon: UniCL matches or exceeds both across datasets and architectures such as ResNet and Swin Transformer. Unlike CE, which may overfit, UniCL's bidirectional alignment offers a regularization effect. Compared to SupCon, which requires two training stages, UniCL trains end-to-end in a single stage while remaining language-aware (see the zero-shot inference sketch after this list).
  • Concept Embedding Visualization: The paper provides qualitative evidence through t-SNE visualizations, showing that UniCL-trained embeddings align more semantically with unseen concepts. This further substantiates the approach's effectiveness in producing meaningful representations.
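
To make the language-awareness point concrete, below is a minimal sketch of zero-shot classification with the two trained encoders, scoring each image against prompt embeddings of the candidate class names. The encoder interfaces and the prompt template are hypothetical stand-ins, assuming both encoders return L2-normalized features.

```python
import torch

@torch.no_grad()
def zero_shot_classify(image_encoder, text_encoder, images, class_names,
                       prompt="a photo of a {}."):
    """Zero-shot inference sketch: pick the class whose prompt embedding
    best matches each image embedding. Encoder interfaces are hypothetical."""
    text_feats = text_encoder([prompt.format(c) for c in class_names])  # (C, d)
    image_feats = image_encoder(images)                                 # (B, d)
    scores = image_feats @ text_feats.t()                               # (B, C)
    return scores.argmax(dim=1)  # predicted class index per image
```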

Implications and Future Directions

The integration of image-label and image-text data within a unified space offers a promising direction for multi-modal representation learning. UniCL's effectiveness in zero-shot and transfer learning scenarios suggests potential for further exploration in applications such as object detection and visual question answering (VQA), especially when trained at scale on large, diverse datasets.

Future work could explore scaling UniCL with more sophisticated architectures and additional data modalities, potentially enhancing its utility across even broader domains of visual understanding and multi-modal analysis.

By merging traditionally isolated datasets and learning objectives, this work contributes to a more cohesive approach to visual recognition, advancing the field toward more versatile AI systems capable of both discriminative and semantic understanding.
