Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods (2209.15589v4)

Published 30 Sep 2022 in cs.CV and cs.LG

Abstract: Self-supervised methods have achieved remarkable success in transfer learning, often achieving the same or better accuracy than supervised pre-training. Most prior work has done so by increasing pre-training computation, adding complex data augmentation, multiple views, or lengthy training schedules. In this work, we investigate a related but orthogonal question: given a fixed FLOP budget, what are the best datasets, models, and (self-)supervised training methods for obtaining high accuracy on representative visual tasks? Given the availability of large datasets, this setting is often more relevant for academic and industry labs alike. We examine five large-scale datasets (JFT-300M, ALIGN, ImageNet-1K, ImageNet-21K, and COCO) and six pre-training methods (CLIP, DINO, SimCLR, BYOL, Masked Autoencoding, and supervised). In a like-for-like fashion, we characterize their FLOP and CO$_2$ footprints, relative to their accuracy when transferred to a canonical image segmentation task. Our analysis reveals strong disparities in the computational efficiency of pre-training methods and their dependence on dataset quality. In particular, our results call into question the commonly held assumption that self-supervised methods inherently scale to large, uncurated data. We therefore advocate for (1) paying closer attention to dataset curation and (2) reporting accuracies in the context of the total computational cost.
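The abstract's central quantity is accuracy obtained per unit of pre-training compute under a fixed FLOP budget. As a rough illustration of that bookkeeping (not the paper's own accounting), the sketch below estimates total training FLOPs using the common heuristic that a backward pass costs roughly 2x a forward pass, then ranks runs by accuracy per exaFLOP. The function name `training_flops` and all numeric values (per-image costs, view counts, epochs, accuracies) are illustrative placeholders, not results from the paper; the ImageNet-1K training-set size (1,281,167 images) is the only real constant used.

```python
def training_flops(forward_flops_per_image: float,
                   images_per_epoch: int,
                   epochs: int,
                   views_per_image: int = 1,
                   backward_multiplier: float = 2.0) -> float:
    """Approximate total training FLOPs: forward + backward over every
    view of every image, for every epoch. Multi-view methods (e.g.
    SimCLR, BYOL) pay the per-image cost once per view."""
    per_view = forward_flops_per_image * (1.0 + backward_multiplier)
    return per_view * views_per_image * images_per_epoch * epochs

# Hypothetical runs: (method, forward FLOPs/image, views, epochs, accuracy).
# These numbers are placeholders for illustration only.
runs = [
    ("supervised", 17.6e9, 1, 90,   0.76),
    ("SimCLR",     17.6e9, 2, 800,  0.74),
    ("MAE",         4.4e9, 1, 1600, 0.75),  # masking cuts encoder cost
]

BUDGET = 1e21  # the fixed FLOP budget used for the comparison
IMAGENET_1K_TRAIN = 1_281_167  # ImageNet-1K training images

for name, flops, views, epochs, acc in runs:
    total = training_flops(flops, IMAGENET_1K_TRAIN, epochs,
                           views_per_image=views)
    status = "within" if total <= BUDGET else "over"
    print(f"{name:>10}: {total:.2e} FLOPs ({status} budget), "
          f"accuracy/exaFLOP = {acc / (total / 1e18):.4f}")
```

Under this kind of accounting, a method with a cheaper per-step cost (e.g. masked autoencoding, which processes only unmasked patches) can afford far more epochs within the same budget, which is exactly the trade-off a fixed-FLOP comparison surfaces.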

Authors (8)
  1. Skanda Koppula
  2. Yazhe Li
  3. Evan Shelhamer
  4. Andrew Jaegle
  5. Nikhil Parthasarathy
  6. Relja Arandjelovic
  7. João Carreira
  8. Olivier Hénaff
Citations (9)
