
Frequency Attention for Knowledge Distillation (2403.05894v1)

Published 9 Mar 2024 in cs.CV

Abstract: Knowledge distillation is an attractive approach for learning compact deep neural networks: a lightweight student model is trained by distilling knowledge from a complex teacher model. Attention-based knowledge distillation is a specific form of intermediate feature-based knowledge distillation that uses attention mechanisms to encourage the student to better mimic the teacher. However, most previous attention-based distillation approaches apply attention in the spatial domain, which primarily affects local regions of the input image. This may not be sufficient when broader context or global information is needed for effective knowledge transfer. In the frequency domain, since each frequency component is determined from all pixels of the image in the spatial domain, it can carry global information about the image. Inspired by these benefits of the frequency domain, we propose a novel module that functions as an attention mechanism in the frequency domain. The module consists of a learnable global filter that adjusts the frequencies of the student's features under the guidance of the teacher's features, encouraging the student's features to exhibit patterns similar to the teacher's. We then propose an enhanced knowledge review-based distillation model that leverages the proposed frequency attention module. Extensive experiments with various teacher and student architectures on image classification and object detection benchmark datasets show that the proposed approach outperforms other knowledge distillation methods.
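
To make the mechanism concrete, below is a minimal PyTorch sketch of a frequency-domain attention module of the kind the abstract describes: the student's feature map is transformed with a 2D FFT, multiplied element-wise by a learnable global filter, transformed back to the spatial domain, and trained to match the teacher's features. This is an illustrative assumption, not the paper's exact formulation; the names `FrequencyAttention` and `frequency_distillation_loss`, the identity-filter initialization, and the L2 matching loss are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrequencyAttention(nn.Module):
    """Hypothetical sketch: a learnable global filter applied to the 2D FFT
    of the student's feature map, so each filter weight can modulate a
    frequency component that depends on all spatial positions."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Complex-valued filter stored as (real, imag) pairs over the rFFT grid.
        # Initialized as an all-pass (identity) filter.
        init = torch.zeros(channels, height, width // 2 + 1, 2)
        init[..., 0] = 1.0
        self.filter = nn.Parameter(init)

    def forward(self, student_feat: torch.Tensor) -> torch.Tensor:
        # Spatial features -> frequency domain: (B, C, H, W//2+1), complex.
        freq = torch.fft.rfft2(student_feat, norm="ortho")
        # Element-wise complex product with the learnable global filter.
        freq = freq * torch.view_as_complex(self.filter)
        # Back to the spatial domain; output shape matches the input.
        return torch.fft.irfft2(freq, s=student_feat.shape[-2:], norm="ortho")


def frequency_distillation_loss(student_feat, teacher_feat, module):
    """Encourage the filtered student features to mimic the teacher's features
    (assumes the feature maps are already aligned in channels and size)."""
    return F.mse_loss(module(student_feat), teacher_feat)
```

In this sketch the filter is "global" in the sense that every frequency bin it scales is a function of the whole spatial feature map, in contrast to spatial attention that reweights local regions.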
