Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images (2310.13782v1)

Published 20 Oct 2023 in cs.CV

Abstract: Knowledge distillation (KD) has been a popular and effective method for model compression. One important assumption of KD is that the original training dataset is always available; however, this is not always the case due to privacy concerns and other restrictions. In recent years, "data-free" KD has emerged as a growing research topic that focuses on performing KD when no data is provided. Many methods rely on a generator network to synthesize examples for distillation, which can be difficult to train and frequently produces images that are visually similar to the original dataset, raising questions about whether privacy is completely preserved. In this work, we propose a new approach to data-free KD that utilizes unnatural OpenGL images, combined with large amounts of data augmentation and adversarial attacks, to train a student network. We demonstrate that our approach achieves state-of-the-art results for a variety of datasets/networks and is more stable than existing generator-based data-free KD methods. Source code will be available in the future.
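
The abstract outlines a general recipe: render unnatural synthetic images, augment them heavily, adversarially perturb them, and distill the teacher's soft labels into the student on those images. The PyTorch sketch below illustrates one plausible form of that loop; it is not the authors' released code. The perturbation objective shown (maximizing teacher/student disagreement with a PGD-style inner loop) and the names `kd_divergence`, `distill_step`, and `synthetic_batch` are illustrative assumptions, and the heavy augmentation stage is only indicated in a comment.

```python
# Illustrative sketch only (not the authors' implementation). Assumptions:
# - `synthetic_batch` is a tensor of procedurally generated images in [0, 1]
#   (e.g. rendered OpenGL shaders), already passed through heavy augmentation.
# - The adversarial objective maximizes teacher/student disagreement, which is
#   one common choice in adversarial distillation and may differ from the paper.
import torch
import torch.nn.functional as F


def kd_divergence(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL divergence between student and teacher soft labels."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def distill_step(teacher, student, synthetic_batch, optimizer,
                 steps=5, epsilon=8 / 255, alpha=2 / 255, T=4.0):
    """One data-free KD update on a batch of adversarially perturbed synthetic images."""
    teacher.eval()
    student.train()

    # PGD-style inner loop: nudge the images (within an L-infinity ball) toward
    # regions where the student currently disagrees most with the teacher.
    adv = synthetic_batch.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        disagreement = kd_divergence(student(adv), teacher(adv), T)
        grad, = torch.autograd.grad(disagreement, adv)
        adv = adv.detach() + alpha * grad.sign()
        adv = synthetic_batch + (adv - synthetic_batch).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1).detach()

    # Standard distillation step: match the teacher's soft labels on the
    # perturbed images.
    with torch.no_grad():
        teacher_logits = teacher(adv)
    optimizer.zero_grad()
    loss = kd_divergence(student(adv), teacher_logits, T)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full training run, each batch would be drawn from a large pool of pre-rendered shader images and pushed through aggressive augmentation before this step; the perturbation budget (`epsilon`, `alpha`, `steps`) and temperature `T` are hyperparameters that would need tuning per dataset and architecture.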
