
Addressing Feature Suppression in Unsupervised Visual Representations (2012.09962v5)

Published 17 Dec 2020 in cs.LG and cs.CV

Abstract: Contrastive learning is one of the fastest-growing research areas in machine learning due to its ability to learn useful representations without labeled data. However, contrastive learning is susceptible to feature suppression: it may discard information relevant to the task of interest and instead learn irrelevant features. Past work has addressed this limitation via handcrafted data augmentations that eliminate irrelevant information. This approach, however, does not work across all datasets and tasks. Moreover, data augmentations cannot address feature suppression in multi-attribute classification, where one attribute can suppress features relevant to other attributes. In this paper, we analyze the objective function of contrastive learning and formally prove that it is vulnerable to feature suppression. We then present predictive contrastive learning (PCL), a framework for learning unsupervised representations that are robust to feature suppression. The key idea is to force the learned representation to predict the input, thereby preventing it from discarding important information. Extensive experiments verify that PCL is robust to feature suppression and outperforms state-of-the-art contrastive learning methods on a variety of datasets and tasks.
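To make the key idea concrete, the sketch below pairs a standard InfoNCE contrastive loss with a decoder that predicts the input back from the learned representation, so the encoder is penalized for discarding information. This is only an illustration of the abstract's stated idea: the network shapes, the MSE reconstruction term, the `recon_weight` coefficient, and the temperature are all assumptions for this sketch, not the authors' actual PCL architecture or objective.

```python
# Minimal sketch of a contrastive loss combined with input prediction.
# All architectural choices here are illustrative assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PCLSketch(nn.Module):
    def __init__(self, in_dim=784, rep_dim=128):
        super().__init__()
        # Encoder maps inputs to representations used by the contrastive loss.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, rep_dim)
        )
        # Decoder predicts the input back from the representation,
        # discouraging the encoder from suppressing features.
        self.decoder = nn.Sequential(
            nn.Linear(rep_dim, 512), nn.ReLU(), nn.Linear(512, in_dim)
        )

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return z, x_hat

def info_nce(z1, z2, temperature=0.5):
    # Standard InfoNCE over two augmented views; matching rows are positives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def pcl_loss(model, x1, x2, recon_weight=1.0):
    # Combined objective: contrastive alignment plus input prediction.
    z1, x1_hat = model(x1)
    z2, x2_hat = model(x2)
    contrastive = info_nce(z1, z2)
    reconstruction = F.mse_loss(x1_hat, x1) + F.mse_loss(x2_hat, x2)
    return contrastive + recon_weight * reconstruction

# Usage: x1 and x2 stand in for two augmented views of the same batch.
model = PCLSketch()
x1, x2 = torch.randn(32, 784), torch.randn(32, 784)
loss = pcl_loss(model, x1, x2)
loss.backward()
```

The reconstruction term is what distinguishes this objective from plain contrastive learning: with InfoNCE alone, the encoder can collapse onto a single easy-to-discriminate attribute, whereas predicting the input forces it to retain information about the other attributes as well.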

Authors (8)
  1. Tianhong Li (21 papers)
  2. Lijie Fan (19 papers)
  3. Yuan Yuan (234 papers)
  4. Hao He (99 papers)
  5. Yonglong Tian (32 papers)
  6. Rogerio Feris (105 papers)
  7. Piotr Indyk (66 papers)
  8. Dina Katabi (37 papers)
Citations (14)
