Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective (2306.13092v3)

Published 22 Jun 2023 in cs.CV, cs.AI, and cs.LG

Abstract: We present a new dataset condensation framework termed Squeeze, Recover and Relabel (SRe$^2$L) that decouples the bilevel optimization of model and synthetic data during training, to handle varying scales of datasets, model architectures and image resolutions for efficient dataset condensation. The proposed method demonstrates flexibility across diverse dataset scales and exhibits multiple advantages in terms of arbitrary resolutions of synthesized images, low training cost and memory consumption with high-resolution synthesis, and the ability to scale up to arbitrary evaluation network architectures. Extensive experiments are conducted on Tiny-ImageNet and full ImageNet-1K datasets. Under 50 IPC, our approach achieves the highest 42.5% and 60.8% validation accuracy on Tiny-ImageNet and ImageNet-1K, outperforming all previous state-of-the-art methods by margins of 14.5% and 32.9%, respectively. Our approach also surpasses MTT in terms of speed by approximately 52$\times$ (ConvNet-4) and 16$\times$ (ResNet-18) faster with less memory consumption of 11.6$\times$ and 6.4$\times$ during data synthesis. Our code and condensed datasets of 50, 200 IPC with 4K recovery budget are available at https://github.com/VILA-Lab/SRe2L.

Authors (3)
  1. Zeyuan Yin (7 papers)
  2. Eric Xing (127 papers)
  3. Zhiqiang Shen (172 papers)
Citations (48)

Summary

Overview of Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale

The paper presents a comprehensive exploration of dataset condensation through a new framework termed Squeeze, Recover, and Relabel (SRe²L). The approach departs from prior methods by decoupling the condensation process, which yields significant gains in scalability and efficiency when handling large-scale datasets such as ImageNet.

Core Concept and Methodology

The central contribution of SRe²L is the division of the data condensation process into three distinct stages: Squeeze, Recover, and Relabel. This structured division alleviates the computational bottleneck of bilevel optimization and makes it feasible to use complex, high-resolution data in dataset condensation. The stages are as follows:

  • Squeeze: This initial phase distills the critical information of the full dataset into a neural network through standard training. Unlike traditional bilevel approaches, which jointly optimize model parameters and synthetic data, SRe²L isolates the task of embedding dataset information into a network, separating model training from synthetic data generation.
  • Recover: In this phase, the information encapsulated during the Squeeze stage is reconstructed into synthetic data. The synthesis aligns the batch statistics of the synthetic images with the Batch Normalization (BN) statistics stored in the squeezed model, a more lightweight strategy than the feature-matching techniques used by prior methods (see the sketch after this list). This alignment keeps the synthesized data close to the original data distribution with minimal computational overhead.
  • Relabel: The final phase assigns soft labels to the synthetic dataset generated during recovery, using the squeezed model as a teacher (a second sketch follows the Recover example below). These soft labels describe the synthesized images more faithfully than hard class labels and improve performance in downstream training.
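
To make the Recover stage concrete, here is a minimal PyTorch-style sketch of BN-statistics-based recovery. It is an illustrative reconstruction rather than the authors' released code: the function names (`bn_alignment_hooks`, `recover_batch`), the loss weight, the optimizer choice, and the omission of the paper's additional image regularizers are all assumptions made for brevity.

```python
# Sketch: recover synthetic images from a frozen, pre-trained ("squeezed") model
# by matching its stored BN running statistics. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def bn_alignment_hooks(model):
    """Attach forward hooks that measure how far the batch statistics of the
    current synthetic batch are from each BN layer's running statistics."""
    losses = []

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append(F.mse_loss(mean, bn.running_mean) +
                          F.mse_loss(var, bn.running_var))
        return hook

    handles = [m.register_forward_hook(make_hook(m))
               for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    return losses, handles


def recover_batch(model, targets, steps=1000, lr=0.1, bn_weight=0.01):
    """Optimize a batch of synthetic images toward the given class targets."""
    model.eval()  # weights stay frozen; only the images are optimized
    images = torch.randn(len(targets), 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([images], lr=lr)
    bn_losses, handles = bn_alignment_hooks(model)

    for _ in range(steps):
        bn_losses.clear()
        optimizer.zero_grad()
        logits = model(images)
        loss = F.cross_entropy(logits, targets)                  # keep images class-discriminative
        loss = loss + bn_weight * torch.stack(bn_losses).sum()   # BN-statistics alignment
        loss.backward()
        optimizer.step()

    for h in handles:
        h.remove()
    return images.detach()


# Example usage (assumes a squeezed ResNet-18 checkpoint at "squeezed.pth"):
# from torchvision.models import resnet18
# model = resnet18(num_classes=1000)
# model.load_state_dict(torch.load("squeezed.pth"))
# synthetic = recover_batch(model, targets=torch.zeros(10, dtype=torch.long))
```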

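The Relabel stage can likewise be sketched as plain teacher inference. The paper generates region-level soft labels aligned with the crop augmentations used during post-training (in the spirit of FKD); the simplified, image-level version below, with hypothetical names such as `relabel` and a `temperature` parameter, only illustrates the idea.

```python
# Sketch: use the squeezed model as a teacher to produce soft labels
# for the recovered images. Image-level simplification of the paper's
# region-level relabeling; illustrative only.
import torch
import torch.nn.functional as F


@torch.no_grad()
def relabel(teacher, synthetic_images, temperature=1.0, batch_size=64):
    """Return a soft-label probability vector for each synthetic image."""
    teacher.eval()
    soft_labels = []
    for start in range(0, len(synthetic_images), batch_size):
        batch = synthetic_images[start:start + batch_size]
        logits = teacher(batch)
        soft_labels.append(F.softmax(logits / temperature, dim=1))
    return torch.cat(soft_labels)


# Downstream models are then trained on (synthetic_images, soft_labels) with a
# soft cross-entropy / KL objective instead of the original hard class labels.
```
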
Empirical Evaluation and Results

Through extensive experiments on the Tiny-ImageNet and full ImageNet-1K datasets, SRe²L demonstrates its efficacy. Under 50 images per class (IPC), the framework reaches 42.5% validation accuracy on Tiny-ImageNet and 60.8% on ImageNet-1K, surpassing previous state-of-the-art methods by 14.5% and 32.9%, respectively. These results come with far lower training and memory demands: compared to MTT, data synthesis is roughly 52× faster with ConvNet-4 and 16× faster with ResNet-18, while consuming 11.6× and 6.4× less memory.

Implications and Future Directions

Practical Implications: The SRe²L framework has significant implications for real-world applications that require large-scale data handling. By sharply reducing computational costs, it makes otherwise prohibitive training and data-processing tasks feasible, particularly in resource-constrained environments.

Theoretical Implications: The framework challenges existing paradigms in data condensation by demonstrating the benefits of decoupling model and data optimization. This novel perspective underscores the potential for further explorations into decoupled and modular architectures in artificial intelligence.

Future Directions: The paper suggests extending SRe²L beyond computer vision into language and speech modalities. Future work could also aim to close the remaining performance gap between condensed datasets and their full counterparts, potentially allowing condensed data to fully substitute for the original datasets.

Conclusion

The Squeeze, Recover, and Relabel framework delineated in this paper marks a substantial advancement in dataset condensation. By replacing traditional bilevel optimization with a strategically decoupled methodology, it sets a new benchmark for effectiveness and efficiency in managing large-scale data and opens new avenues for research and practical applications in AI. Its successful application to full ImageNet-1K at standard resolution exemplifies the scalability and robustness needed for dataset synthesis in evolving AI research.