Dataset Distillation via Adversarial Prediction Matching (2312.08912v1)
Abstract: Dataset distillation is the technique of synthesizing smaller, condensed datasets from large original datasets while retaining the information needed to preserve the training effect. In this paper, we approach the dataset distillation problem from a novel perspective: we regard minimizing the prediction discrepancy on the real data distribution between models trained on the large original dataset and on the small distilled dataset, respectively, as a conduit for condensing information from the raw data into the distilled version. We propose an adversarial framework to solve this problem efficiently. In contrast to existing distillation methods that involve nested optimization or long-range gradient unrolling, our approach hinges on single-level optimization. This keeps our method memory-efficient and provides a flexible tradeoff between time and memory budgets, allowing us to distill ImageNet-1K with as little as 6.5 GB of GPU memory. Under the optimal tradeoff strategy, it requires only 2.5$\times$ less memory and 5$\times$ less runtime than the state-of-the-art. Empirically, our method can produce synthetic datasets just 10% the size of the original, yet models trained on them achieve, on average, 94% of the test accuracy of models trained on the full original datasets, including ImageNet-1K, significantly surpassing the state-of-the-art. Additionally, extensive tests show that our distilled datasets generalize well across architectures.
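To make the two-player structure described in the abstract concrete, the snippet below is a minimal PyTorch-style sketch of what one adversarial prediction-matching round could look like. It assumes a KL-divergence discrepancy between teacher and student logits, alternating single-step updates (no unrolled inner loop), and a cross-entropy term on the synthetic labels; these choices, along with names such as `teacher`, `student`, `syn_images`, and `syn_labels`, are illustrative assumptions and not the paper's exact objective.

```python
# Minimal sketch of adversarial prediction matching (illustrative, not the
# authors' implementation). The student trains on the distilled set to match
# the teacher's predictions, while the distilled images are pushed in the
# opposite direction to maximize the teacher-student discrepancy.
import torch
import torch.nn.functional as F

def prediction_discrepancy(teacher_logits, student_logits, T=4.0):
    """Soft-label KL divergence, a common choice for prediction matching."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def adversarial_round(teacher, student, syn_images, syn_labels,
                      student_opt, syn_opt):
    teacher.eval()

    # (1) Student step: train on the current distilled set so its predictions
    #     move toward the teacher's soft labels (assumed loss combination).
    student.train()
    student_opt.zero_grad()
    with torch.no_grad():
        t_logits = teacher(syn_images)
    s_logits = student(syn_images)
    loss_student = (prediction_discrepancy(t_logits, s_logits)
                    + F.cross_entropy(s_logits, syn_labels))
    loss_student.backward()
    student_opt.step()

    # (2) Adversarial data step: update the distilled images to *maximize*
    #     the teacher-student discrepancy, exposing regions the student has
    #     not yet matched. Single-level: one gradient step, no unrolling.
    student.eval()
    syn_opt.zero_grad()
    adv_loss = -prediction_discrepancy(teacher(syn_images),
                                       student(syn_images))
    adv_loss.backward()
    syn_opt.step()
    return loss_student.item(), -adv_loss.item()

# Example setup (illustrative): the distilled images are free parameters.
# syn_images = torch.randn(100, 3, 32, 32, requires_grad=True)
# syn_labels = torch.arange(10).repeat_interleave(10)
# syn_opt = torch.optim.Adam([syn_images], lr=0.1)
# student_opt = torch.optim.SGD(student.parameters(), lr=0.01)
```

Because each round uses only single gradient steps for both players, memory usage is bounded by one forward/backward pass rather than by an unrolled trajectory, which is consistent with the memory/runtime tradeoff claimed in the abstract.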
Authors: Mingyang Chen, Bo Huang, Junda Lu, Bing Li, Yi Wang, Minhao Cheng, Wei Wang