Conditional Contrastive Learning for Improving Fairness in Self-Supervised Learning (2106.02866v2)

Published 5 Jun 2021 in cs.LG

Abstract: Contrastive self-supervised learning (SSL) learns an embedding space that maps similar data pairs closer and dissimilar data pairs farther apart. Despite its success, one issue has been overlooked: the fairness aspect of representations learned using contrastive SSL. Without mitigation, contrastive SSL techniques can incorporate sensitive information such as gender or race and cause potentially unfair predictions on downstream tasks. In this paper, we propose a Conditional Contrastive Learning (CCL) approach to improve the fairness of contrastive SSL methods. Our approach samples positive and negative pairs from distributions conditioned on the sensitive attribute; in practice, this means sampling positive and negative pairs from the same gender or the same race. We show that our approach provably maximizes the conditional mutual information between the learned representations of the positive pairs, and reduces the effect of the sensitive attribute by taking it as the conditional variable. On seven fairness and vision datasets, we empirically demonstrate that the proposed approach achieves state-of-the-art downstream performance compared to unsupervised baselines and significantly improves the fairness of contrastive SSL models on multiple fairness metrics.
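The core idea of conditional sampling can be illustrated with a short sketch. The following is not the authors' implementation, but a minimal InfoNCE-style loss in which, as the abstract describes, every negative pair is restricted to examples sharing the anchor's sensitive attribute; the function name, temperature value, and toy data are hypothetical.

```python
import torch
import torch.nn.functional as F

def conditional_info_nce(z1, z2, sensitive, temperature=0.1):
    """InfoNCE loss with negatives conditioned on the sensitive attribute.

    z1, z2:    (N, D) embeddings of two augmented views (positive pairs).
    sensitive: (N,) integer sensitive-attribute labels (e.g., gender or race).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Pairwise similarities between views; the diagonal holds the positives.
    logits = z1 @ z2.t() / temperature  # (N, N)

    # Mask out candidate negatives whose sensitive attribute differs from the
    # anchor's, so all contrasted pairs come from the same attribute group.
    same_group = sensitive.unsqueeze(0) == sensitive.unsqueeze(1)  # (N, N)
    logits = logits.masked_fill(~same_group, float("-inf"))

    # Cross-entropy with the matching view as the target class. The diagonal
    # is never masked, since each anchor shares its own sensitive attribute.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy usage: random embeddings with a binary sensitive attribute.
if __name__ == "__main__":
    torch.manual_seed(0)
    z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
    sensitive = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
    print(conditional_info_nce(z1, z2, sensitive).item())
```

Masking cross-group pairs approximates sampling negatives from the distribution conditioned on the sensitive attribute, which is what removes the incentive for the encoder to separate embeddings by that attribute.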

Authors (7)
  1. Martin Q. Ma (9 papers)
  2. Yao-Hung Hubert Tsai (41 papers)
  3. Paul Pu Liang (103 papers)
  4. Han Zhao (159 papers)
  5. Kun Zhang (353 papers)
  6. Ruslan Salakhutdinov (248 papers)
  7. Louis-Philippe Morency (123 papers)
Citations (11)
