Efficient Point Cloud Classification via Offline Distillation Framework and Negative-Weight Self-Distillation Technique (2409.02020v2)
Abstract: The rapid advancement of point cloud processing technologies has significantly increased the demand for efficient, compact models that achieve high-accuracy classification. Knowledge distillation (KD) has emerged as a potent model compression technique. However, traditional KD often requires extensive computational resources for forward inference through large teacher models, which reduces the training efficiency of student models and increases resource demands. To address these challenges, we introduce an offline recording strategy that avoids loading the teacher and student models simultaneously, thereby reducing hardware demands. This approach feeds many augmented samples into the teacher model and records both the data augmentation parameters and the corresponding logit outputs. By applying only shape-level augmentation operations such as random scaling and translation, while excluding point-level operations like random jittering, the size of the records is significantly reduced. Additionally, to mitigate the tendency of a small student model to over-imitate the teacher's outputs and converge to suboptimal solutions, we incorporate a negative-weight self-distillation strategy. Experimental results demonstrate that the proposed distillation strategy enables the student model to achieve performance comparable to state-of-the-art models while maintaining a lower parameter count, striking a favorable balance between performance and complexity. This study highlights the potential of our method to optimize knowledge distillation for point cloud classification, particularly in resource-constrained environments, and provides a novel solution for efficient point cloud analysis.
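Below is a minimal sketch, not the authors' reference implementation, of the two ideas described in the abstract: (1) an offline recording phase that stores only shape-level augmentation parameters (a scale and a translation) together with the teacher's logits, so the teacher never needs to be loaded during student training, and (2) a distillation loss in which a self-distillation term enters with a negative weight. The model interfaces, the source of the "self" logits (e.g., an earlier snapshot of the student), and all hyperparameter values are illustrative assumptions.

```python
# Hedged sketch of offline logit recording and negative-weight self-distillation.
# All weights, ranges, and the self-teacher choice are assumptions, not the paper's exact settings.
import torch
import torch.nn.functional as F

def augment_shape_level(points, scale, translation):
    """Apply the recorded shape-level augmentation (random scaling + translation).
    points: (N, 3) tensor; scale: scalar tensor; translation: (3,) tensor."""
    return points * scale + translation

@torch.no_grad()
def record_teacher_outputs(teacher, dataset, num_augs=4):
    """Offline phase: run the large teacher once and store, per augmented sample,
    only the augmentation parameters and the resulting logits."""
    teacher.eval()
    records = []
    for idx, (points, label) in enumerate(dataset):
        for _ in range(num_augs):
            scale = torch.empty(1).uniform_(0.8, 1.2)    # shape-level: random scaling
            trans = torch.empty(3).uniform_(-0.1, 0.1)   # shape-level: random translation
            logits = teacher(augment_shape_level(points, scale, trans).unsqueeze(0))
            records.append({"idx": idx, "scale": scale, "trans": trans,
                            "teacher_logits": logits.squeeze(0).cpu()})
    # Each record holds one scalar, one 3-vector, and one logit vector,
    # which keeps the stored file small compared to saving augmented point clouds.
    return records

def distillation_loss(student_logits, teacher_logits, self_logits, label,
                      T=4.0, w_kd=1.0, w_self=-0.1):
    """Cross-entropy + teacher KD term + a self-distillation term with a
    *negative* weight (w_self < 0), intended to discourage over-imitation.
    The exact weighting scheme here is an assumed illustration."""
    ce = F.cross_entropy(student_logits, label)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    sd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(self_logits.detach() / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return ce + w_kd * kd + w_self * sd
```

During student training, each record is replayed by applying `augment_shape_level` with the stored parameters to the original sample, so the student sees exactly the input the teacher saw, without the teacher being resident in memory.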