Multi-modal Relation Distillation for Unified 3D Representation Learning (2407.14007v2)

Published 19 Jul 2024 in cs.CV and cs.AI

Abstract: Recent advancements in multi-modal pre-training for 3D point clouds have demonstrated promising results by aligning heterogeneous features across 3D shapes and their corresponding 2D images and language descriptions. However, current straightforward solutions often overlook intricate structural relations among samples, potentially limiting the full capabilities of multi-modal learning. To address this issue, we introduce Multi-modal Relation Distillation (MRD), a tri-modal pre-training framework designed to effectively distill reputable large Vision-Language Models (VLMs) into 3D backbones. MRD aims to capture both the intra-modal relations within each modality and the cross-modal relations between different modalities, producing more discriminative 3D shape representations. Notably, MRD achieves significant improvements in downstream zero-shot classification and cross-modality retrieval tasks, delivering new state-of-the-art performance.
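To make the abstract's core idea concrete, the sketch below illustrates what relation distillation could look like in PyTorch. It is a minimal, hypothetical illustration rather than the paper's actual implementation: the function names, tensor shapes, temperature value, and the specific KL-divergence formulation are all assumptions. The intent is only to show the mechanism the abstract describes, matching the batch-wise similarity ("relation") structure of a frozen teacher VLM with that of a trainable 3D student, both within a modality and across modalities.

```python
import torch
import torch.nn.functional as F


def intra_relation_loss(student_feats, teacher_feats, tau=0.1):
    """Distill intra-modal relations (illustrative sketch, not the paper's code).

    student_feats: (B, D) embeddings from the trainable 3D backbone.
    teacher_feats: (B, D) embeddings from a frozen teacher modality
    (e.g., the VLM's image or text encoder) for the same batch.
    """
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)

    # Pairwise cosine similarities among the B samples, turned into
    # row-wise distributions; the student should mirror the teacher's
    # relational structure over the batch.
    s_rel = F.log_softmax(s @ s.T / tau, dim=-1)
    t_rel = F.softmax(t @ t.T / tau, dim=-1)
    return F.kl_div(s_rel, t_rel, reduction="batchmean")


def cross_relation_loss(student_3d, teacher_img, teacher_txt, tau=0.1):
    """Distill cross-modal relations (again, an assumed formulation).

    How each 3D shape relates to the batch's text embeddings should
    mirror how the paired image relates to those same texts.
    """
    s = F.normalize(student_3d, dim=-1)
    i = F.normalize(teacher_img, dim=-1)
    x = F.normalize(teacher_txt, dim=-1)

    s_cross = F.log_softmax(s @ x.T / tau, dim=-1)
    t_cross = F.softmax(i @ x.T / tau, dim=-1)
    return F.kl_div(s_cross, t_cross, reduction="batchmean")
```

In a tri-modal setup along these lines, the intra-relation term would be applied once per teacher modality (image and text), while the cross-relation term constrains the 3D-to-text relations to agree with the image-to-text relations, complementing the sample-level feature alignment that prior straightforward solutions rely on.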

Authors (7)
  1. Huiqun Wang (6 papers)
  2. Yiping Bao (8 papers)
  3. Panwang Pan (14 papers)
  4. Zeming Li (53 papers)
  5. Xiao Liu (402 papers)
  6. Ruijie Yang (26 papers)
  7. Di Huang (203 papers)