Decoupled Contrastive Learning for Long-Tailed Recognition (2403.06151v1)

Published 10 Mar 2024 in cs.CV

Abstract: Supervised Contrastive Loss (SCL) is popular in visual representation learning. Given an anchor image, SCL pulls two types of positive samples together, i.e., its augmentation and other images from the same class, while pushing negative images apart to optimize the learned embedding. In long-tailed recognition, where the number of samples per class is imbalanced, treating the two types of positive samples equally leads to biased optimization of the intra-category distance. In addition, the similarity relationships among negative samples, which SCL ignores, also carry meaningful semantic cues. To improve performance on long-tailed recognition, this paper addresses these two issues of SCL by decoupling the training objective. Specifically, it decouples the two types of positives in SCL and optimizes their relations toward different objectives to alleviate the influence of the imbalanced dataset. We further propose a patch-based self-distillation to transfer knowledge from head to tail classes and relieve the under-representation of tail classes. It uses patch-based features to mine shared visual patterns among different instances and leverages a self-distillation procedure to transfer such knowledge. Experiments on different long-tailed classification benchmarks demonstrate the superiority of our method. For instance, it achieves 57.7% top-1 accuracy on the ImageNet-LT dataset. Combined with an ensemble-based method, the performance can be further boosted to 59.7%, which substantially outperforms many recent works. The code is available at https://github.com/SY-Xuan/DSCL.
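To make the decoupling idea concrete, below is a minimal PyTorch sketch of a supervised contrastive loss in which the two kinds of positives are weighted separately. The function name `decoupled_supcon_loss`, the batch layout, and the single balancing hyperparameter `alpha` are illustrative assumptions based only on the abstract, not the paper's exact formulation; the authors' actual objective is in the linked repository.

```python
# Sketch of a decoupled supervised contrastive loss (assumption: a single
# weight `alpha` balances the augmentation positive against same-class
# positives; the paper's exact scheme may differ).
import torch
import torch.nn.functional as F

def decoupled_supcon_loss(features, labels, alpha=0.5, temperature=0.1):
    """features: (2N, D) embeddings laid out as [x_1..x_N, aug(x_1)..aug(x_N)];
    labels: (N,) integer class ids for the N original images."""
    n = labels.shape[0]
    z = F.normalize(features, dim=1)               # compare in cosine space
    labels2 = torch.cat([labels, labels], dim=0)   # labels for both views

    sim = z @ z.t() / temperature                  # (2N, 2N) similarity logits
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)         # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Two positive sets per anchor: its own augmented view vs. the other
    # same-class samples in the batch.
    aug_mask = self_mask.roll(n, dims=1)           # pairs row i with column i+n (mod 2N)
    class_mask = (labels2[:, None] == labels2[None, :]) & ~self_mask
    other_mask = class_mask & ~aug_mask

    aug_term = (log_prob * aug_mask).sum(dim=1)    # exactly one augmentation positive
    other_cnt = other_mask.sum(dim=1).clamp(min=1) # avoid division by zero
    other_term = (log_prob * other_mask).sum(dim=1) / other_cnt

    # Decoupling: fixed weights for the two positive types, so class
    # frequency no longer dictates how strongly each one is pulled.
    return -(alpha * aug_term + (1.0 - alpha) * other_term).mean()
```

For intuition: standard SCL averages uniformly over all positives of an anchor, so for head-class anchors the many same-class positives dominate and the augmentation positive is drowned out, while tail-class anchors see the opposite. Fixing `alpha` independently of class size is one way to remove that frequency dependence, which is the bias the abstract describes.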

Authors (2)
  1. Shiyu Xuan
  2. Shiliang Zhang