
Are Large-scale Datasets Necessary for Self-Supervised Pre-training? (2112.10740v1)

Published 20 Dec 2021 in cs.CV

Abstract: Pre-training models on large-scale datasets, like ImageNet, is a standard practice in computer vision. This paradigm is especially effective for tasks with small training sets, for which high-capacity models tend to overfit. In this work, we consider a self-supervised pre-training scenario that only leverages the target task data. We consider datasets, like Stanford Cars, Sketch or COCO, which are order(s) of magnitude smaller than ImageNet. Our study shows that denoising autoencoders, such as BEiT or a variant that we introduce in this paper, are more robust to the type and size of the pre-training data than popular self-supervised methods trained by comparing image embeddings. We obtain competitive performance compared to ImageNet pre-training on a variety of classification datasets, from different domains. On COCO, when pre-training solely using COCO images, the detection and instance segmentation performance surpasses the supervised ImageNet pre-training in a comparable setting.
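
For illustration, below is a minimal, hypothetical PyTorch sketch of the kind of denoising-autoencoder (masked image modeling) pre-training objective the abstract refers to, run on target-task images only. The tiny transformer, 16-pixel patches, and 60% masking ratio are assumptions chosen for readability; they are not the paper's exact BEiT setup or the variant it introduces.

```python
# Hypothetical sketch of masked-image-modeling pre-training on the target dataset alone.
# All hyperparameters and the model itself are illustrative assumptions.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))    # learned positions
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))            # placeholder for masked patches
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decoder = nn.Linear(dim, 3 * patch * patch)                  # reconstruct raw pixels

    def forward(self, x, mask):
        # x: (B, 3, H, W); mask: (B, N) boolean, True = patch is masked out
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos      # (B, N, D)
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        return self.decoder(self.encoder(tokens))                          # (B, N, 3*p*p)

def patchify(x, p):
    # Split images into non-overlapping p x p patches, flattened per patch.
    B, C, H, W = x.shape
    x = x.unfold(2, p, p).unfold(3, p, p)                                  # (B, C, H/p, W/p, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

model = TinyMaskedAutoencoder()
opt = torch.optim.AdamW(model.parameters(), lr=1.5e-4)
images = torch.randn(8, 3, 224, 224)                # stand-in for a batch of target-task images
mask = torch.rand(8, model.num_patches) < 0.6       # mask ~60% of patches (assumed ratio)

opt.zero_grad()
pred = model(images, mask)
target = patchify(images, model.patch)
loss = ((pred - target) ** 2)[mask].mean()          # denoising loss on masked patches only
loss.backward()
opt.step()
```

After pre-training with such an objective on the target data itself, the encoder would be fine-tuned on the downstream classification or detection task, which is the scenario the abstract evaluates.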

Authors (6)
  1. Alaaeldin El-Nouby (21 papers)
  2. Gautier Izacard (17 papers)
  3. Hugo Touvron (22 papers)
  4. Ivan Laptev (99 papers)
  5. Edouard Grave (56 papers)
  6. Hervé Jégou (3 papers)
Citations (142)
