Big Transfer (BiT): General Visual Representation Learning (1912.11370v3)

Published 24 Dec 2019 in cs.CV and cs.LG

Abstract: Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.

Authors (7)
  1. Alexander Kolesnikov (44 papers)
  2. Lucas Beyer (46 papers)
  3. Xiaohua Zhai (51 papers)
  4. Joan Puigcerver (20 papers)
  5. Jessica Yung (5 papers)
  6. Sylvain Gelly (43 papers)
  7. Neil Houlsby (62 papers)
Citations (1,124)

Summary

  • The paper demonstrates that large-scale supervised pre-training with BiT significantly enhances performance on visual tasks even in low-data regimes.
  • It introduces the BiT-HyperRule fine-tuning strategy, which streamlines hyperparameter tuning while adapting models across over 20 diverse tasks.
  • Empirical results show state-of-the-art accuracy, including 87.5% top-1 on ILSVRC-2012 and strong few-shot performance, such as 97.0% on CIFAR-10 with 10 examples per class.

Big Transfer (BiT): General Visual Representation Learning

The paper "Big Transfer (BiT): General Visual Representation Learning" presents an insightful paper into the efficacy of pre-trained visual representations across a diverse array of downstream tasks. Conducted by a team at Google Research, the work revisits the paradigm of pre-training on large supervised datasets followed by fine-tuning on target tasks, focusing on scalability and generalizability.

Core Contributions

The paper makes key contributions in both methodology and empirical results:

  1. Scalability of Pre-Training:
    • BiT models are pre-trained on three varying scales of datasets: ILSVRC-2012 (1.28M images), ImageNet-21k (14M images), and JFT-300M (300M images).
    • The largest model, BiT-L, pre-trained on JFT-300M, achieves robust performance across a comprehensive set of 20+ downstream tasks, even in low-data regimes.
  2. Performance Evaluation:
    • BiT demonstrated strong numerical results, achieving 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task VTAB benchmark.
    • Particularly noteworthy is BiT's performance in few-shot scenarios: 76.8% accuracy on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class.
  3. Transfer Protocol:
    • The paper proposes a simple fine-tuning heuristic, BiT-HyperRule, which effectively generalizes across tasks without the need for extensive hyperparameter tuning.
    • This approach significantly reduces the computational cost for practitioners, making the pre-trained models more accessible for various applications (a minimal fine-tuning sketch follows this list).
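
To make the transfer recipe concrete, here is a minimal PyTorch sketch of the generic fine-tuning step: load a pre-trained backbone, swap in a new classification head, and fine-tune all weights with SGD. The timm checkpoint name, data loader, and step budget are illustrative placeholders rather than the authors' exact setup.

```python
import torch
import torch.nn as nn
import timm  # assumes a BiT-style ResNetV2 checkpoint is exposed through timm

# Hypothetical checkpoint name -- actual timm identifiers may differ.
model = timm.create_model("resnetv2_50x1_bit.goog_in21k",
                          pretrained=True, num_classes=10)  # e.g. CIFAR-10

optimizer = torch.optim.SGD(model.parameters(), lr=3e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune(model, loader, num_steps):
    """Fine-tune the whole network (no frozen layers) for a fixed step budget."""
    model.train()
    step = 0
    while step < num_steps:
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            step += 1
            if step >= num_steps:
                break
```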

Detailed Methodology

The authors stress the importance of a scalable pre-training approach. They explore how the combination of larger datasets and larger architectures leads to better transfer learning outcomes, elucidating the relationship between computational budget, dataset size, and model architecture.

Upstream Pre-Training

The pre-training phase employs ResNet architectures with Group Normalization (GN) and Weight Standardization (WS), put forth as substitutes for the widely used Batch Normalization (BN). The rationale is that GN/WS hold up better when large-batch training is split into small per-device batches and transfer better across diverse tasks. This choice avoids the common pitfalls of BN, especially its need for inter-device synchronization and running-statistics updates, which can be detrimental to transfer learning.
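
For concreteness, the sketch below shows a BN-free building block in PyTorch: a Weight-Standardized convolution followed by Group Normalization. The post-activation ordering and the group count of 32 are simplifying assumptions; BiT's actual backbone is a pre-activation ResNet-v2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StdConv2d(nn.Conv2d):
    """Conv2d with Weight Standardization: each filter is normalized to
    zero mean and unit variance before the convolution is applied."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        w = (w - mean) / torch.sqrt(var + 1e-10)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

def gn_ws_block(in_ch, out_ch, groups=32):
    """Conv -> GroupNorm -> ReLU unit replacing the usual Conv -> BN -> ReLU
    (group count and ordering simplified for illustration)."""
    return nn.Sequential(
        StdConv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.GroupNorm(groups, out_ch),
        nn.ReLU(inplace=True),
    )
```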

Downstream Fine-Tuning

BiT-HyperRule simplifies downstream fine-tuning by leveraging three key hyperparameters adjusted per task: training schedule length, resolution, and MixUp regularization. This heuristic enables efficient and effective adaptation of pre-trained models to downstream tasks of varying sizes and complexities.
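
A compact Python sketch of the rule follows. The thresholds and values reflect the paper's description of BiT-HyperRule (500/10k/20k steps for small/medium/large tasks, MixUp for the larger regimes, and a two-way resolution rule); treat it as a summary of the heuristic rather than a drop-in implementation.

```python
def bit_hyperrule(num_examples, image_area):
    """Pick fine-tuning hyperparameters from dataset size and image size,
    following the BiT-HyperRule heuristic described in the paper."""
    # Schedule length and MixUp usage grow with the downstream dataset size.
    if num_examples < 20_000:          # "small" tasks
        steps, use_mixup = 500, False
    elif num_examples < 500_000:       # "medium" tasks
        steps, use_mixup = 10_000, True
    else:                              # "large" tasks
        steps, use_mixup = 20_000, True

    # Resolution rule: small images are resized to 160 and cropped to 128;
    # larger images are resized to 448 and cropped to 384.
    if image_area < 96 * 96:
        resize, crop = 160, 128
    else:
        resize, crop = 448, 384

    return {
        "steps": steps,
        "mixup_alpha": 0.1 if use_mixup else 0.0,
        "resize": resize,
        "crop": crop,
        # SGD with momentum 0.9, initial LR 0.003, decayed by 10x at
        # 30%, 60%, and 90% of the schedule.
        "base_lr": 3e-3,
        "lr_decay_steps": [int(steps * f) for f in (0.3, 0.6, 0.9)],
    }
```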

Empirical Analysis

The rigorous experimental setup provides a comprehensive evaluation across various benchmarks:

  • Standard Vision Benchmarks: BiT-L attains state-of-the-art results on ILSVRC-2012, CIFAR-10/100, Oxford-IIIT Pet, and Oxford Flowers.
  • Few-Shot Learning: BiT-L delivers strong performance with extremely limited labeled data, outperforming existing semi-supervised approaches (a sketch of the 10-example-per-class setup follows this list).
  • VTAB-1k Benchmark: BiT-L excels in specialized tasks involving natural, structured, and specialized imaging.
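
To picture the few-shot protocol referenced above, the torchvision snippet below builds a 10-example-per-class CIFAR-10 training subset; it is an illustrative sketch of the data setup, not the authors' evaluation code.

```python
import collections
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Build a 10-example-per-class CIFAR-10 training subset (illustrative only).
transform = transforms.Compose([transforms.ToTensor()])
full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transform)

per_class = collections.defaultdict(list)
for idx, label in enumerate(full_train.targets):  # labels only, no image decoding
    if len(per_class[label]) < 10:
        per_class[label].append(idx)

few_shot_indices = [i for idxs in per_class.values() for i in idxs]
few_shot_train = Subset(full_train, few_shot_indices)
print(f"Few-shot training set size: {len(few_shot_train)}")  # 100 examples
```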

Additionally, robustness tests on ObjectNet and out-of-context images show that BiT remains accurate under distribution shift, supporting its use in less curated, real-world scenarios.

Implications and Future Directions

The implications of BiT models are multifold:

  • Practical Applications: BiT's robust pre-trained models require minimal tuning, making them easy to apply to diverse visual tasks without substantial computational overhead.
  • Versatility: The ability of BiT to generalize across datasets with varying data regimes underscores its versatility, rendering it suitable for both high- and low-resource settings.
  • Theoretical Insights: The paper underscores the value of scale—both in terms of datasets and model architectures—in achieving superior transfer learning performance.

Future work could investigate:

  1. Further Scaling: Exploring larger datasets and more sophisticated architectures could unlock additional performance gains.
  2. Fine-Tuning Heuristics: Refining and potentially automating optimal fine-tuning strategies per task could streamline the transfer learning process.
  3. Broader Applicability: Extending BiT's methodologies to other domains beyond visual representation, such as language and multimodal tasks, could yield similarly promising results.

In summary, the Big Transfer (BiT) approach offers a methodologically sound and empirically validated strategy for harnessing large-scale pre-training to achieve exceptional performance across a wide spectrum of visual tasks, presenting a valuable asset for the research community and practical applications alike.
