
Contrasting Contrastive Self-Supervised Representation Learning Pipelines (2103.14005v2)

Published 25 Mar 2021 in cs.CV and cs.LG

Abstract: In the past few years, we have witnessed remarkable breakthroughs in self-supervised representation learning. Despite the success and adoption of representations learned through this paradigm, much is yet to be understood about how different training methods and datasets influence performance on downstream tasks. In this paper, we analyze contrastive approaches as one of the most successful and popular variants of self-supervised representation learning. We perform this analysis from the perspective of the training algorithms, pre-training datasets and end tasks. We examine over 700 training experiments including 30 encoders, 4 pre-training datasets and 20 diverse downstream tasks. Our experiments address various questions regarding the performance of self-supervised models compared to their supervised counterparts, current benchmarks used for evaluation, and the effect of the pre-training data on end task performance. Our Visual Representation Benchmark (ViRB) is available at: https://github.com/allenai/virb.
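For context, the contrastive pipelines analyzed in this paper typically optimize an InfoNCE-style objective that pulls two augmented views of the same image together in embedding space while pushing apart views of different images. The following is a minimal PyTorch sketch of such a loss; the function name, temperature value, and batching convention are illustrative assumptions, not drawn from the paper or the ViRB codebase.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE / NT-Xent-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    """
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarity matrix
    # Mask self-similarity so each row never treats itself as a candidate.
    sim.fill_diagonal_(float('-inf'))
    # Row i's positive is the other augmented view of the same image:
    # rows 0..N-1 pair with N..2N-1, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```

In practice, methods of this family differ mainly in how the two views are produced and how negatives are sourced (e.g., in-batch negatives versus a memory queue), which is part of what motivates comparing pipelines under a common benchmark.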

Authors (5)
  1. Klemen Kotar
  2. Gabriel Ilharco
  3. Ludwig Schmidt
  4. Kiana Ehsani
  5. Roozbeh Mottaghi
Citations (44)