
Understanding Contrastive Learning Requires Incorporating Inductive Biases (2202.14037v1)

Published 28 Feb 2022 in cs.LG and cs.AI

Abstract: Contrastive learning is a popular form of self-supervised learning that encourages augmentations (views) of the same input to have more similar representations than augmentations of different inputs. Recent attempts to theoretically explain the success of contrastive learning on downstream classification tasks prove guarantees that depend on properties of the augmentations and on the value of the contrastive loss of the representations. We demonstrate that such analyses, which ignore the inductive biases of the function class and training algorithm, cannot adequately explain the success of contrastive learning, and even provably lead to vacuous guarantees in some settings. Extensive experiments on image and text domains highlight the ubiquity of this problem: different function classes and algorithms behave very differently on downstream tasks, despite having the same augmentations and contrastive losses. Theoretical analysis is presented for the class of linear representations, where incorporating inductive biases of the function class allows contrastive learning to work under less stringent conditions than prior analyses require.
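To make the setup concrete, the contrastive objective the abstract describes can be sketched as an InfoNCE-style loss: two augmented views of the same input form a positive pair, and all other pairs in the batch serve as negatives. This is a minimal NumPy illustration, not the paper's exact implementation; the function name, shapes, and temperature value are assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss.

    z1[i] and z2[i] are representations of two views (augmentations)
    of the same input; all other pairs in the batch act as negatives.
    Illustrative sketch only, not the paper's implementation.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal; minimize their negative log-likelihood.
    return -np.mean(np.diag(log_probs))
```

As the abstract notes, this loss value alone is the same regardless of which function class produced `z1` and `z2`; the paper's point is that guarantees based only on the loss and the augmentations therefore cannot distinguish representations that succeed downstream from those that fail.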

Authors (8)
  1. Nikunj Saunshi (23 papers)
  2. Jordan Ash (2 papers)
  3. Surbhi Goel (44 papers)
  4. Dipendra Misra (34 papers)
  5. Cyril Zhang (34 papers)
  6. Sanjeev Arora (93 papers)
  7. Sham Kakade (84 papers)
  8. Akshay Krishnamurthy (92 papers)
Citations (99)
