LaCViT: A Label-aware Contrastive Fine-tuning Framework for Vision Transformers (2303.18013v3)

Published 31 Mar 2023 in cs.CV and cs.AI

Abstract: Vision Transformers (ViTs) have emerged as popular models in computer vision, demonstrating state-of-the-art performance across various tasks. This success typically follows a two-stage strategy involving pre-training on large-scale datasets using self-supervised signals, such as masked random patches, followed by fine-tuning on task-specific labeled datasets with cross-entropy loss. However, this reliance on cross-entropy loss has been identified as a limiting factor in ViTs, affecting their generalization and transferability to downstream tasks. Addressing this critical challenge, we introduce a novel Label-aware Contrastive Training framework, LaCViT, which significantly enhances the quality of embeddings in ViTs. LaCViT not only addresses the limitations of cross-entropy loss but also facilitates more effective transfer learning across diverse image classification tasks. Our comprehensive experiments on eight standard image classification datasets reveal that LaCViT statistically significantly enhances the performance of three evaluated ViTs by up to 10.78% in Top-1 Accuracy.
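
The abstract describes replacing the standard cross-entropy fine-tuning objective with a label-aware (supervised) contrastive objective over ViT embeddings. The sketch below illustrates that general idea with a SupCon-style loss in PyTorch; the function name, temperature value, and batching details are illustrative assumptions and may differ from LaCViT's exact formulation.

```python
import torch
import torch.nn.functional as F


def label_aware_contrastive_loss(embeddings: torch.Tensor,
                                 labels: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """Supervised (label-aware) contrastive loss over a batch of embeddings.

    Embeddings that share a label are treated as positives and pulled together;
    all other samples in the batch act as negatives. This follows the general
    SupCon formulation and is only a sketch of what a label-aware contrastive
    fine-tuning objective might look like.
    """
    z = F.normalize(embeddings, dim=1)                      # (B, D) unit vectors
    sim = z @ z.T / temperature                             # (B, B) scaled similarities

    # Exclude each anchor from its own denominator.
    batch_size = labels.size(0)
    self_mask = torch.eye(batch_size, dtype=torch.bool, device=z.device)
    sim_denominator = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim_denominator, dim=1, keepdim=True)

    # Positives: same label as the anchor, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)

    # Average log-probability over positives, for anchors that have any.
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(dim=1) / pos_counts.clamp(min=1)
    return -mean_log_prob_pos[pos_counts > 0].mean()
```

In a fine-tuning loop, such a loss would typically be applied to (possibly projected) [CLS]-token embeddings of a pre-trained ViT, either alone or alongside a conventional classification head.
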

Authors (4)
  1. Zijun Long (11 papers)
  2. Zaiqiao Meng (42 papers)
  3. Gerardo Aragon Camarasa (6 papers)
  4. Richard McCreadie (19 papers)
Citations (5)
