
The Power of Contrast for Feature Learning: A Theoretical Analysis (2110.02473v4)

Published 6 Oct 2021 in cs.LG and stat.ML

Abstract: Contrastive learning has achieved state-of-the-art performance on various self-supervised learning tasks and even outperforms its supervised counterpart. Despite this empirical success, theoretical understanding of its superiority is still limited. In this paper, under a linear representation setting, (i) we provably show that contrastive learning outperforms standard autoencoders and generative adversarial networks, two classical generative unsupervised learning methods, for both feature recovery and in-domain downstream tasks; (ii) we also illustrate the impact of labeled data in supervised contrastive learning. This provides theoretical support for recent findings that contrastive learning with labels improves the performance of learned representations on in-domain downstream tasks but can harm performance in transfer learning. We verify our theory with numerical experiments.
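To make the "linear representation setting" concrete, the following is a minimal toy sketch of contrastive feature recovery with a linear encoder. The data model, the specific pull-together/push-apart objective, and all parameter values are illustrative assumptions for this sketch, not the paper's actual estimator or theoretical conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: observations x lie near a k-dimensional signal subspace U of R^d.
d, k, n = 20, 3, 500
U, _ = np.linalg.qr(rng.normal(size=(d, k)))   # ground-truth feature subspace
z = rng.normal(size=(n, k))                    # latent signals
x = z @ U.T + 0.1 * rng.normal(size=(n, d))    # noisy observations

# Two "augmented views" per sample: independent small perturbations,
# so positive pairs differ only by noise while negatives differ by signal.
x1 = x + 0.1 * rng.normal(size=(n, d))
x2 = x + 0.1 * rng.normal(size=(n, d))

# Contrastive objective for a linear encoder h = W x: minimize the distance
# between positive pairs and maximize it for randomly permuted negatives.
Dp = x1 - x2                         # positive-pair differences
Dn = x1 - x2[rng.permutation(n)]     # negative-pair differences
Cpos = Dp.T @ Dp / n
Cneg = Dn.T @ Dn / n

W = 0.01 * rng.normal(size=(k, d))
lr = 0.05
for _ in range(100):
    grad = 2.0 * W @ (Cpos - Cneg)   # gradient of tr(W Cpos W^T) - tr(W Cneg W^T)
    W -= lr * grad
    Q, _ = np.linalg.qr(W.T)         # re-orthonormalize rows to prevent collapse
    W = Q.T

# Feature recovery: mean squared cosine of the principal angles between the
# learned row space of W and the true subspace U (1.0 = perfect recovery).
s = np.linalg.svd(U.T @ W.T, compute_uv=False)
recovery = float(np.mean(s**2))
print(f"subspace recovery: {recovery:.3f}")
```

Because positive-pair differences carry only augmentation noise while negative-pair differences contain the signal, gradient descent with re-orthonormalization acts like subspace iteration toward the signal directions, so the learned row space aligns with U. This mirrors, in a crude way, the feature-recovery comparison the abstract describes.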

Authors (5)
  1. Wenlong Ji (12 papers)
  2. Zhun Deng (38 papers)
  3. Ryumei Nakada (12 papers)
  4. James Zou (232 papers)
  5. Linjun Zhang (70 papers)
Citations (44)
