
FastDINOv2: Frequency Based Curriculum Learning Improves Robustness and Training Speed (2507.03779v1)

Published 4 Jul 2025 in cs.CV, cs.AI, and cs.LG

Abstract: Large-scale vision foundation models such as DINOv2 boast impressive performances by leveraging massive architectures and training datasets. But numerous scenarios require practitioners to reproduce those pre-training solutions, such as on private data, new modalities, or simply for scientific questioning--which is currently extremely demanding computation-wise. We thus propose a novel pre-training strategy for DINOv2 that simultaneously accelerates convergence--and strengthens robustness to common corruptions as a by-product. Our approach involves a frequency filtering curriculum--low-frequency being seen first--and the Gaussian noise patching augmentation. Applied to a ViT-B/16 backbone trained on ImageNet-1K, while pre-training time and FLOPs are reduced by 1.6x and 2.25x, our method still achieves matching robustness in corruption benchmarks (ImageNet-C) and maintains competitive linear probing performance compared with baseline. This dual benefit of efficiency and robustness makes large-scale self-supervised foundation modeling more attainable, while opening the door to novel exploration around data curriculum and augmentation as means to improve self-supervised learning models robustness. The code is available at https://github.com/KevinZ0217/fast_dinov2

Summary

  • The paper introduces a two-stage frequency-based curriculum that uses low-resolution training and Gaussian noise patching to accelerate convergence and improve robustness.
  • The approach reduces pretraining time by 1.6× and FLOPs by 2.25× on ImageNet-1K while maintaining near-baseline accuracy.
  • The methodology enables robust self-supervised vision models on resource-constrained hardware, broadening access to high-performance pretraining.

FastDINOv2: Frequency-Based Curriculum Learning for Efficient and Robust Vision Model Pretraining

FastDINOv2 introduces a frequency-based curriculum learning strategy for self-supervised pretraining of vision transformers, specifically targeting the DINOv2 framework. The method is motivated by the high computational demands and limited robustness of large-scale vision foundation models, and aims to make robust self-supervised learning more accessible by reducing both training time and resource requirements while maintaining or improving robustness to common corruptions.

Methodology

The core contribution is a two-stage curriculum that leverages the spectral properties of natural images:

  1. Stage 1: Low-Frequency Curriculum. For the initial 75% of training epochs, the model is trained exclusively on downsampled images, emphasizing low-frequency content. This is implemented via bicubic downsampling applied after standard DINOv2 cropping, reducing global crops from 224×224 to 112×112 and local crops from 96×96 to 48×48. This stage accelerates convergence by focusing on coarse, structural features.
  2. Stage 2: Full-Resolution with Gaussian Noise Patching. In the final 25% of training, the model transitions to full-resolution images. To counteract the high-frequency bias induced by the curriculum, Gaussian noise patching is introduced: random patches in each image are replaced with Gaussian noise, encouraging invariance to high-frequency perturbations and improving robustness.
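The second-stage augmentation can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the patch size, noise statistics, and single-patch policy are placeholder choices, and a real pipeline would operate on image tensors rather than nested lists.

```python
import random

def gaussian_noise_patch(img, patch=32, mean=0.5, std=0.2):
    """Replace one randomly placed square patch of `img` with Gaussian noise.

    `img` is a nested list [H][W] of floats in [0, 1]. A practical
    implementation would work on batched tensors and might patch
    several regions per image; the values here are illustrative.
    """
    h, w = len(img), len(img[0])
    # Choose the top-left corner of the patch uniformly at random.
    y0 = random.randint(0, h - patch)
    x0 = random.randint(0, w - patch)
    for y in range(y0, y0 + patch):
        for x in range(x0, x0 + patch):
            # Sample i.i.d. Gaussian noise, clipped to the valid pixel range.
            img[y][x] = min(1.0, max(0.0, random.gauss(mean, std)))
    return img
```

Because the noise destroys local image statistics inside the patch, the model is pushed to rely on surrounding context rather than fine high-frequency detail.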

A key implementation detail is the use of a constant batch size across both stages to ensure training stability. Positional embeddings are interpolated when transitioning between resolutions, with bicubic interpolation of the first-stage embedding yielding the best results.
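The embedding transition might look like the following sketch, which resamples an [H][W][D] grid of per-patch positional embeddings. Bilinear interpolation is used here purely to keep the code short; the paper reports that bicubic interpolation of the first-stage embedding works best.

```python
def resize_pos_grid(grid, new_h, new_w):
    """Bilinearly resample a [H][W][D] grid of positional embeddings.

    DINOv2-style ViTs store one embedding per patch token; moving from
    112px to 224px inputs with 16px patches grows the grid from 7x7
    to 14x14, so the learned embeddings must be interpolated.
    """
    h, w, d = len(grid), len(grid[0]), len(grid[0][0])
    out = []
    for i in range(new_h):
        # Map output coordinates back to source coordinates (align corners).
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0, ty = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        row = []
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0, tx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            # Blend the four neighbouring embeddings per dimension.
            vec = [
                (1 - ty) * ((1 - tx) * grid[y0][x0][k] + tx * grid[y0][x1][k])
                + ty * ((1 - tx) * grid[y1][x0][k] + tx * grid[y1][x1][k])
                for k in range(d)
            ]
            row.append(vec)
        out.append(row)
    return out
```

In a framework implementation this is a single call to a built-in resize routine with bicubic mode; the point of the sketch is that only the positional grid changes shape, while all other weights transfer unchanged between stages.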

Empirical Results

FastDINOv2 demonstrates strong empirical performance across several axes:

  • Training Efficiency:

On ImageNet-1K with a ViT-B/16 backbone, FastDINOv2 reduces pretraining time by 1.6× and FLOPs by 2.25× compared to the DINOv2 baseline, with a corresponding reduction in GPU memory requirements during the low-resolution stage (from 33.5GB to 9.47GB per GPU for batch size 128).
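A back-of-the-envelope check of the FLOPs figure is possible under the simplifying assumption that per-step compute scales linearly with the patch-token count (self-attention's quadratic term is ignored, so this slightly understates the savings):

```python
def approx_flops_reduction(low_frac=0.75, patch=16,
                           low_res=112, full_res=224):
    """Rough FLOPs ratio of the two-stage curriculum vs. all-full-resolution
    training, assuming per-step cost proportional to the token count."""
    low_tokens = (low_res // patch) ** 2    # 7 * 7 = 49 tokens
    full_tokens = (full_res // patch) ** 2  # 14 * 14 = 196 tokens
    # Fraction of baseline compute: low-res epochs at reduced cost,
    # plus the remaining epochs at full cost.
    blended = low_frac * low_tokens / full_tokens + (1 - low_frac)
    return 1 / blended

# Roughly 2.3x under this linear-cost approximation, consistent with
# the reported 2.25x measured reduction.
```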

  • Accuracy and Robustness:

Linear probing on ImageNet-1K shows only a minor drop in clean accuracy (from 77.8% to 76.2%), while corruption robustness on ImageNet-C is essentially matched (56.5% vs. 56.7%). On ImageNet-100, the curriculum reaches baseline accuracy in 20% fewer epochs, and combining the curriculum with Gaussian noise patching yields a 6% absolute improvement in corruption accuracy with negligible loss in clean accuracy.

  • Task Generalization:

Instance recognition (Oxford/Paris datasets) and semantic segmentation (ADE20K) tasks confirm that the approach preserves or improves performance on both instance-level and pixel-level tasks, despite the initial focus on low-frequency features.

  • Spectral Analysis:

Fourier heatmaps and Grad-CAM visualizations reveal that FastDINOv2-trained models exhibit reduced error sensitivity in high- and medium-frequency bands and focus more on object contours, indicating a shift in feature utilization.

Analysis and Implications

The paper provides several insights into the interplay between curriculum learning, frequency bias, and robustness:

  • Frequency Bias and Robustness Trade-offs:

The low-frequency curriculum biases the model toward high-frequency features, improving robustness to low-frequency corruptions but increasing vulnerability to high-frequency noise. Gaussian noise patching, a low-frequency-biased augmentation, counterbalances this effect, leading to a more spectrally balanced model.

  • Curriculum and Augmentation Synergy:

Integrating frequency-based curriculum with targeted augmentations (Gaussian noise patching) yields additive benefits, improving robustness across a broad spectrum of corruptions without sacrificing clean accuracy.

  • Resource Accessibility:

The substantial reduction in memory and compute requirements during the majority of training epochs enables pretraining on less expensive hardware, broadening access to robust self-supervised vision models.

Limitations and Future Directions

A primary limitation is the use of a fixed schedule for transitioning between low- and high-resolution training. The optimal transition point may vary with model size, dataset, or other hyperparameters. Future work should explore adaptive scheduling strategies, potentially guided by convergence metrics or validation performance. Additionally, combining this curriculum with other robustness-enhancing techniques, such as adversarial training, remains an open avenue.

Broader Impact and Prospects

FastDINOv2 demonstrates that robustness in self-supervised vision models can be achieved through principled curriculum and augmentation design, rather than relying solely on extreme scale. This has practical implications for reproducibility, accessibility, and deployment of robust vision models in resource-constrained settings. The frequency-based curriculum paradigm may inspire analogous strategies in other domains (e.g., audio, multimodal learning) and inform the development of adaptive, data-driven curricula for efficient and robust representation learning.

The codebase is available at https://github.com/KevinZ0217/fast_dinov2, facilitating further experimentation and adoption.
