3D Point Cloud Pre-training with Knowledge Distillation from 2D Images (2212.08974v1)

Published 17 Dec 2022 in cs.CV

Abstract: The recent success of pre-trained 2D vision models is largely attributable to learning from large-scale datasets. However, compared with 2D image datasets, the pre-training data currently available for 3D point clouds is limited. To overcome this limitation, we propose a knowledge distillation method that allows 3D point cloud pre-trained models to acquire knowledge directly from a 2D representation learning model, particularly the image encoder of CLIP, through concept alignment. Specifically, we introduce a cross-attention mechanism to extract concept features from 3D point clouds and compare them with the semantic information from 2D images. In this scheme, the point cloud pre-trained models learn directly from the rich information contained in 2D teacher models. Extensive experiments demonstrate that the proposed knowledge distillation scheme achieves higher accuracy than state-of-the-art 3D pre-training methods on synthetic and real-world datasets across downstream tasks, including object classification, object detection, semantic segmentation, and part segmentation.
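
To make the concept-alignment idea concrete, here is a minimal sketch of how such a distillation head could look. It is not the paper's implementation: the module names, dimensions, mean-pooling, and cosine distillation loss are all assumptions for illustration. Learnable concept queries cross-attend over point features from a 3D student encoder, and the pooled result is aligned with a frozen CLIP image embedding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAlignmentDistiller(nn.Module):
    """Illustrative sketch of concept-alignment distillation: learnable
    concept queries cross-attend over point features, and the pooled result
    is aligned with a frozen CLIP image embedding. Dimensions, pooling, and
    the cosine loss are assumptions, not the paper's exact configuration."""

    def __init__(self, point_dim=256, clip_dim=512, num_concepts=32, num_heads=8):
        super().__init__()
        # Learnable "concept" queries that pool semantics from the point cloud.
        self.concept_queries = nn.Parameter(torch.randn(num_concepts, point_dim))
        self.cross_attn = nn.MultiheadAttention(point_dim, num_heads, batch_first=True)
        # Project pooled 3D concept features into the CLIP embedding space.
        self.proj = nn.Linear(point_dim, clip_dim)

    def forward(self, point_feats, clip_image_emb):
        # point_feats: (B, N, point_dim) from the 3D student encoder
        # clip_image_emb: (B, clip_dim) from the frozen CLIP image teacher
        B = point_feats.size(0)
        queries = self.concept_queries.unsqueeze(0).expand(B, -1, -1)
        # Concept queries attend over point features to extract concept features.
        concepts, _ = self.cross_attn(queries, point_feats, point_feats)
        # Mean-pool the concepts and map them into the teacher's embedding space.
        student_emb = self.proj(concepts.mean(dim=1))
        # Distillation loss: pull the student embedding toward the teacher's.
        return 1.0 - F.cosine_similarity(student_emb, clip_image_emb, dim=-1).mean()
```

In a setup like this, the teacher embeddings would presumably come from a frozen CLIP image encoder applied to images paired with each point cloud, with only the 3D encoder and this alignment head updated during pre-training.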

Authors (6)
  1. Yuan Yao (292 papers)
  2. Yuanhan Zhang (29 papers)
  3. Zhenfei Yin (41 papers)
  4. Jiebo Luo (355 papers)
  5. Wanli Ouyang (358 papers)
  6. Xiaoshui Huang (55 papers)
Citations (8)
