
CO^3: Cooperative Unsupervised 3D Representation Learning for Autonomous Driving (2206.04028v2)

Published 8 Jun 2022 in cs.CV and cs.RO

Abstract: Unsupervised contrastive learning for indoor-scene point clouds has achieved great success. However, unsupervised learning on point clouds in outdoor scenes remains challenging because previous methods need to reconstruct the whole scene and capture partial views for the contrastive objective, which is infeasible in outdoor scenes with moving objects, obstacles, and sensors. In this paper, we propose CO3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations for outdoor-scene point clouds in an unsupervised manner. CO3 has several merits compared to existing methods. (1) It utilizes LiDAR point clouds from the vehicle side and the infrastructure side to build views that differ enough while maintaining common semantic information for contrastive learning; these views are more appropriate than those built by previous methods. (2) Alongside the contrastive objective, shape context prediction is proposed as a pre-training goal and brings more task-relevant information for unsupervised 3D point cloud representation learning, which is beneficial when transferring the learned representation to downstream detection tasks. (3) Compared to previous methods, the representation learned by CO3 can be transferred to different outdoor-scene datasets collected by different types of LiDAR sensors. (4) CO3 improves current state-of-the-art methods on both the Once and KITTI datasets by up to 2.58 mAP. Codes and models will be released. We believe CO3 will facilitate understanding of LiDAR point clouds in outdoor scenes.
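The abstract describes a contrastive objective between vehicle-side and infrastructure-side LiDAR views, where corresponding points across the two views form positive pairs. As a rough illustration of that idea (not the paper's actual implementation, which also includes shape context prediction), here is a minimal NumPy sketch of a point-wise InfoNCE loss; the function name, feature shapes, and temperature value are assumptions for illustration:

```python
import numpy as np

def info_nce(vehicle_feats, infra_feats, temperature=0.1):
    """Point-wise InfoNCE between two views of the same scene.

    Rows with the same index in the two (N, D) feature arrays are
    treated as positive pairs; all other rows serve as negatives.
    """
    # L2-normalize so that dot products are cosine similarities.
    v = vehicle_feats / np.linalg.norm(vehicle_feats, axis=1, keepdims=True)
    i = infra_feats / np.linalg.norm(infra_feats, axis=1, keepdims=True)
    logits = v @ i.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)    # row-wise softmax
    # Diagonal entries correspond to the positive (matched) pairs.
    return -np.log(np.diag(probs)).mean()
```

With well-aligned cross-view features, the diagonal similarities dominate and the loss is low; with unrelated features, it approaches log N. The actual method operates on per-point features produced by a 3D backbone rather than raw arrays.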

Authors (8)
  1. Runjian Chen (20 papers)
  2. Yao Mu (58 papers)
  3. Runsen Xu (13 papers)
  4. Wenqi Shao (89 papers)
  5. Chenhan Jiang (12 papers)
  6. Hang Xu (205 papers)
  7. Zhenguo Li (195 papers)
  8. Ping Luo (340 papers)
Citations (12)