
Feature Adversarial Distillation for Point Cloud Classification (2306.14221v2)

Published 25 Jun 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Because of the irregular and unordered geometric structure of point clouds, conventional knowledge distillation loses a great deal of information when applied directly to point cloud tasks. In this paper, we propose Feature Adversarial Distillation (FAD), a generic adversarial loss function for point cloud distillation that reduces loss during knowledge transfer. In the feature extraction stage, the features extracted by the teacher serve as the discriminator, while the student continuously generates new features during training. The student's features are obtained by attacking the teacher's feedback and receiving a score that judges how well the student has learned the knowledge. In experiments on standard point cloud classification on the ModelNet40 and ScanObjectNN datasets, our method reduces the information loss of knowledge transfer under 40x model compression while maintaining competitive performance.
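The abstract only sketches the adversarial setup, and the paper's exact loss is not given here. The following is a minimal, generic sketch of adversarial feature distillation, assuming a toy linear discriminator that scores whether a feature "looks like" a teacher feature; the dimensions, the discriminator form, and the feature tensors are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: teacher and student features share dimension d = 16.
d = 16
teacher_feat = rng.normal(size=(8, d))  # features from the (frozen) teacher
student_feat = rng.normal(size=(8, d))  # features produced by the student

# Toy linear discriminator D(f) = sigmoid(f @ w + b); in practice this would
# be a learned network updated alongside the student.
w = rng.normal(size=(d,)) * 0.1
b = 0.0

def discriminator_score(feat):
    return sigmoid(feat @ w + b)

eps = 1e-8  # numerical guard inside the logs

# Discriminator objective: teacher features are labeled 1 ("real"),
# student features 0 ("fake") -- standard binary cross-entropy.
d_loss = -(np.log(discriminator_score(teacher_feat) + eps).mean()
           + np.log(1.0 - discriminator_score(student_feat) + eps).mean())

# Student (generator) objective: fool the discriminator so that its own
# features receive high "teacher-like" scores.
g_loss = -np.log(discriminator_score(student_feat) + eps).mean()

print(f"discriminator loss: {d_loss:.4f}")
print(f"student adversarial loss: {g_loss:.4f}")
```

In a full training loop the two losses would be minimized alternately (discriminator step, then student step), with the adversarial term typically added to the usual classification loss for the student.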

Authors (2)