CompleteDT: Point Cloud Completion with Dense Augment Inference Transformers (2205.14999v2)

Published 30 May 2022 in cs.CV

Abstract: The point cloud completion task aims to predict the missing parts of incomplete point clouds and generate complete point clouds with fine details. In this paper, we propose a novel point cloud completion network, CompleteDT. Specifically, features are learned from point clouds at different resolutions, which are sampled from the incomplete input, and are converted into a series of \textit{spots} based on the geometric structure. Then, the transformer-based Dense Relation Augment Module (DRA) is proposed to learn features within \textit{spots} and to model the correlation among these \textit{spots}. The DRA consists of the Point Local-Attention Module (PLA) and the Point Dense Multi-Scale Attention Module (PDMA): the PLA captures local information within each \textit{spot} by adaptively weighting its neighbors, while the PDMA exploits the global relationships among \textit{spots} in a multi-scale, densely connected manner. Lastly, the complete shape is predicted from the \textit{spots} by the Multi-resolution Point Fusion Module (MPF), which gradually generates complete point clouds from the \textit{spots} and updates the \textit{spots} based on these generated point clouds. Experimental results show that, because the transformer-based DRA learns expressive features from the incomplete input and the MPF fully exploits these features to predict the complete shape, our method largely outperforms state-of-the-art methods.
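The abstract describes the PLA as capturing local information by adaptively weighting a point's neighbors. The paper's actual architecture is not reproduced here, but a minimal NumPy sketch of the general idea — attention-style aggregation of features over k-nearest-neighbor neighborhoods — might look like the following (all function names and shapes are hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

def knn_indices(points, k):
    """For each point, return indices of its k nearest neighbors (self included)."""
    # points: (N, 3) coordinates
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (N, N) squared distances
    return np.argsort(d2, axis=1)[:, :k]                           # (N, k)

def local_attention(feats, points, k=8):
    """Toy per-point attention over a k-NN neighborhood.

    feats: (N, C) per-point features; points: (N, 3) coordinates.
    Returns (N, C): each point's feature replaced by a softmax-weighted
    sum of its neighbors' features, with weights adapted to feature similarity.
    """
    idx = knn_indices(points, k)                    # (N, k) neighbor indices
    neigh = feats[idx]                              # (N, k, C) neighbor features
    # Attention logits: dot product between a point's feature and each neighbor's
    logits = (feats[:, None, :] * neigh).sum(-1)    # (N, k)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)               # softmax over the neighborhood
    return (w[:, :, None] * neigh).sum(axis=1)      # (N, C) aggregated features
```

The real PLA presumably uses learned query/key/value projections inside a transformer block; this sketch only illustrates the "adaptively weighted neighbors" mechanism the abstract refers to.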

Authors (3)
  1. Jun Li (780 papers)
  2. Shangwei Guo (32 papers)
  3. Shaokun Han (2 papers)
Citations (3)
