Rethinking ImageNet Pre-training (1811.08883v1)

Published 21 Nov 2018 in cs.CV

Abstract: We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data---a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks and we expect these discoveries will encourage people to rethink the current de facto paradigm of `pre-training and fine-tuning' in computer vision.

Citations (1,037)

Summary

  • The paper demonstrates that models trained from random initialization can match or outperform ImageNet pre-trained models on COCO, achieving up to 42.7 AP with ResNet-101.
  • It reveals that techniques like Group Normalization and proper weight initialization enable competitive performance without the need for extensive pre-training.
  • The study indicates that while pre-training accelerates early convergence, prolonged training on target data can effectively eliminate its advantage.

Rethinking ImageNet Pre-training

The paper "Rethinking ImageNet Pre-training" by Kaiming He, Ross Girshick, and Piotr Doll challenges the conventional paradigm in computer vision where models are pre-trained on large-scale datasets like ImageNet and then fine-tuned on specific target tasks. The paper presents empirical evidence suggesting that training models from random initialization can achieve performance comparable to (and occasionally better than) models that utilize ImageNet pre-training, given that adequate data and computational resources are available.

Summary of Findings

The paper's experiments focus primarily on the COCO dataset for object detection and instance segmentation tasks, using the Mask R-CNN framework with a variety of backbones including ResNet, ResNeXt, and VGG. Key observations include:

  1. Competitive Performance Without Pre-training: Models trained from random initialization achieve comparable results to their ImageNet pre-trained counterparts when sufficient training iterations are provided. This includes achieving 41.3 AP for ResNet-50 and 42.7 AP for ResNet-101 on COCO object detection, without utilizing any pre-trained weights.
  2. Enhanced Architectures and Training Techniques: Normalization methods that are robust to small batch sizes, namely Group Normalization (GN) and Synchronized Batch Normalization (SyncBN), made training from scratch feasible, and an appropriately normalized weight initialization further stabilized it (see the sketch after this list).
  3. Convergence Behaviour: ImageNet pre-training principally accelerates early-stage convergence but does not necessarily contribute to better final accuracy. When models are allowed sufficient training duration, the advantage held by pre-trained models diminishes.
  4. Data Sufficiency: Even when the available training data is reduced to 10% of the COCO dataset (around 10k images), models trained from scratch managed to perform competitively, reaching up to 25.9 AP compared to 26.0 AP of pre-trained models. However, the performance gap widens when the data size is reduced to 3.5k images or fewer, where pre-trained models show a clear advantage.
  5. Task Sensitivity: On tasks that demand fine spatial localization, such as keypoint detection or evaluation at higher IoU thresholds, models trained from scratch tend to perform better, indicating that classification-based pre-training does not adequately capture localization-sensitive features.
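As a concrete illustration of point 2, the sketch below is my own (not taken from the paper): it combines the two ingredients highlighted above, Group Normalization, whose statistics do not depend on batch size, and an appropriately normalized (He/Kaiming) weight initialization. The block structure and channel counts are assumptions chosen for illustration.

```python
# Minimal sketch of a conv block suitable for from-scratch detector training:
# GroupNorm instead of (frozen) BatchNorm, plus He initialization.
import torch
import torch.nn as nn

def conv_gn_relu(in_ch: int, out_ch: int, groups: int = 32) -> nn.Sequential:
    conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
    # He ("kaiming") initialization, appropriate for ReLU networks trained from scratch.
    nn.init.kaiming_normal_(conv.weight, mode="fan_out", nonlinearity="relu")
    # GroupNorm normalizes over channel groups, so it is insensitive to the
    # small per-GPU batch sizes typical of detection training.
    return nn.Sequential(conv, nn.GroupNorm(groups, out_ch), nn.ReLU(inplace=True))

block = conv_gn_relu(256, 256)                 # e.g., a head/FPN conv in a Mask R-CNN-style model
features = block(torch.randn(2, 256, 64, 64))  # works even with a batch of 2
```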

Implications of the Research

The key implications of this paper are as follows:

  • Future Data Collection:

In scenarios where collecting large-scale target-domain data is feasible, investing in that data is likely more beneficial than relying on generic large-scale pre-training datasets like ImageNet. Doing so can yield larger domain-specific gains and reduce dependence on pre-trained models.

  • Research Methodologies:

Researchers should reconsider the de facto reliance on ImageNet pre-training, especially when assessing new methodologies for model initialization or self-supervised learning. Evaluating such methodologies against a from-scratch baseline, rather than only against ImageNet pre-training, may reveal additional insights.

  • Universal Representations:

The notion of learning universal feature representations remains valid but needs careful empirical validation. The diminishing returns from enlarging classification datasets for pre-training suggest that other avenues, such as training directly on large amounts of domain-specific data, may be more fruitful.

Future Directions

These findings prompt several potential research directions:

  1. Optimization Schedules: Investigating more sophisticated optimization schedules and normalization strategies that can expedite convergence for models trained from scratch (a minimal scheduling sketch follows this list).
  2. Hybrid Approaches: Combining minimal pre-training on smaller or synthetic datasets with extensive fine-tuning on target tasks to balance convergence speed and accuracy.
  3. Task-specific Pre-training: Exploring task-specific pre-training datasets that better capture the nuances of the target tasks, especially those involving detailed spatial or localization requirements.
  4. Automated Hyper-parameter Search: Implementing more robust automated methods for hyper-parameter search that can mitigate the overfitting issues observed with smaller datasets without the need for pre-training.
  5. Continual Learning: Developing frameworks where models can continually learn from new data, focusing on both maintaining performance on old tasks and improving on new tasks without the explicit need for large pre-trained models.
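On the first direction, the most basic knob the paper itself turns is simply stretching the standard detection schedule. The sketch below is a hedged illustration, not the authors' training code: it assumes the common Detectron-style "1x" convention (90k iterations with learning-rate drops at 60k and 80k) and scales it by an integer multiplier for from-scratch training.

```python
# Hedged illustration: stretching a Detectron-style "1x" schedule, since the
# paper trains from-scratch models for several times longer than the fine-tuning baseline.
import torch
from torch.optim.lr_scheduler import MultiStepLR

def stretched_schedule(optimizer: torch.optim.Optimizer, multiplier: int = 6) -> MultiStepLR:
    # "1x" = 90k iterations with 10x LR drops at 60k and 80k (assumed convention).
    milestones = [60_000 * multiplier, 80_000 * multiplier]
    return MultiStepLR(optimizer, milestones=milestones, gamma=0.1)

model = torch.nn.Linear(8, 8)  # stand-in for a detector's parameters
opt = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=1e-4)
scheduler = stretched_schedule(opt, multiplier=6)  # "6x" schedule for training from scratch
```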

Conclusion

This paper significantly contributes to the ongoing discourse on the role of pre-training in deep learning, particularly in computer vision. By empirically demonstrating that models trained from scratch can match the performance of those relying on extensive pre-training, it prompts a reassessment of long-standing assumptions. As the field moves forward, these insights will likely catalyze more nuanced, domain-specific, and efficient training methodologies that make fuller use of the inherent capacity of deep neural networks.
