
RegFreeNet: Registration-Free Implant Planning

Updated 28 January 2026
  • The paper introduces a registration-free deep learning framework that predicts dental implant position and orientation directly from masked postoperative CBCT scans.
  • It leverages a neighboring distance perception module and a slope-aware dual-decoder to extract anatomical context and perform angular regression for implant alignment.
  • Evaluations on the ImplantFairy dataset demonstrate improved Dice scores and robust generalization compared to traditional CNN models without requiring pre/postoperative registration.

RegFreeNet is a registration-free deep learning framework devised for automated implant position prediction in cone-beam computed tomography (CBCT)–based 3D dental implant planning. Departing from previous approaches requiring spatial registration between pre- and postoperative scans, RegFreeNet leverages a masking paradigm to train directly on postoperative CBCT data, circumventing the need for expensive, error-prone label transfer procedures and enabling large-scale, multi-center dataset construction. Its core architecture integrates a neighboring distance perception module for anatomical context extraction and a slope-aware dual-decoder capturing both position and orientation of dental implants solely from unregistered, masked postoperative scans (Yang et al., 21 Jan 2026).

1. Input Masking and Registration-Free Paradigm

RegFreeNet operates on single-channel CBCT volumes $V \in \mathbb{R}^{C \times D \times H \times W}$ with $C = 1$. The metallic implant region is masked by nulling a cylindrical voxel volume surrounding the annotated implant axis. This masking renders direct implant cues inaccessible, compelling the model to predict implant location and orientation from surrounding dental and osseous structures alone.

This methodology obviates any requirement for pre/postoperative image registration, as no explicit mapping is performed. All postoperative CBCT scans become viable for model training, independent of paired preoperative data. This paradigm directly addresses limitations inherent in conventional approaches, specifically: (i) commercial surgical guide software's inability to export implant positional data from preoperative scans; (ii) the burdensome, accuracy-dependent registration process; and (iii) difficulties in aggregating multi-center datasets due to the rarity of matched CBCT pairs.
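The cylindrical nulling step can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the axis endpoints, radius, and volume shape below are assumptions for the example.

```python
import numpy as np

def mask_implant_cylinder(vol, p0, p1, radius):
    """Null a cylindrical region around the implant axis (segment p0 -> p1).

    Illustrative sketch: distances are computed in voxel units on a dense grid.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    axis = p1 - p0
    length = np.linalg.norm(axis)
    axis = axis / length
    # Coordinates of every voxel center.
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in vol.shape],
                                indexing="ij"), axis=-1).astype(float)
    rel = grid - p0
    # Projection of each voxel onto the axis, clipped to the segment.
    t = np.clip(rel @ axis, 0.0, length)
    closest = p0 + t[..., None] * axis
    dist = np.linalg.norm(grid - closest, axis=-1)
    masked = vol.copy()
    masked[dist <= radius] = 0.0  # null the implant region
    return masked
```

The dense voxel grid is memory-heavy for full scans but is fine at patch resolution; a production pipeline would restrict the computation to a bounding box around the axis.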

2. Architecture Overview

The RegFreeNet architecture is composed of:

  • Neighboring Distance Perception (NDP) Module: Functions as the initial encoding layer, tasked with anatomical context recovery after implant masking.
  • 3D U-Net Encoder: Follows the NDP module, structured with four down-sampling blocks and producing hierarchical features $M^{(e)}_1, \dots, M^{(e)}_4$ corresponding to successively coarser spatial resolutions.
  • Dual-Branch Decoder: Comprises two decoders—a position regression branch (IPPB) for volumetric implant mask prediction and a slope prediction branch (SPB) for axis orientation regression.

Workflow

Stage | Input/Output | Operation
--- | --- | ---
Input | $V$ | CBCT volume, implant region masked
NDP Module | $F^{\text{NDP}}$ | Multi-scale, graph-based feature extraction of neighboring teeth
Encoder | $M^{(e)}_{1\ldots4}$ | Standard U-Net blocks, hierarchical feature encoding
IPPB | $\hat{Y}(x,y,z)$ | 3D upsampling decoder, outputs per-voxel implant presence probabilities
SPB | $\hat{k} = [\hat{k}^1, \hat{k}^2]$ | MLP that infers angular slopes (implant axis) from deepest encoder features

The IPPB restores the coarse bottleneck representation $M^{(e)}_4$ to full input resolution to yield a single-channel probability map. The SPB, a compact multilayer perceptron, performs angular regression for the implant axis.

3. Neighboring Distance Perception Module

The NDP module mitigates information loss caused by implant masking through multi-scale anatomical context modeling and explicit graph-based integration of neighboring tooth relationships.

  • Multi-Scale Dilated Convolutions: Apply $3\times3\times3$ convolutions at several dilation rates $d$ to the masked input:

$F_d = \mathrm{Conv}^{(d)}_{3\times3\times3}(V)$

These produce receptive fields capturing variable spatial extents.

  • KeyPoint Extraction (KNet): Each $F_d$ is processed into keypoint features $P_d \in \mathbb{R}^{K \times c}$ (with $K$ keypoints) via a 3D convolution and adaptive pooling.
  • Graph Convolutional Network: Keypoints $P_d$ are interpreted as nodes in a fully-connected graph and passed through two graph-convolutional layers:

$G_d = \mathrm{GCN}_2(\mathrm{GCN}_1(P_d))$

  • Reshaping and Fusion: $G_d$ is upsampled to match the original volume shape and fused with $F_d$ through residual addition:

$\tilde{F}_d = F_d + \mathrm{Up}(G_d)$

Finally, all $\tilde{F}_d$ are concatenated (or summed) and projected by a $1\times1\times1$ convolution to yield $F^{\text{NDP}}$.

This module enables the model to exploit both local and nonlocal anatomical context in the vicinity of the masked implant site.
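The graph step can be illustrated with a minimal dense graph convolution over $K$ keypoints. Everything here, the feature width, the weight shapes, and the uniform fully-connected adjacency, is an assumption for the sketch rather than the paper's exact formulation.

```python
import numpy as np

def gcn_layer(H, W, A):
    """One dense graph-convolution layer: aggregate neighbors, then project.

    H: (K, c) node features, W: (c, c_out) weights, A: (K, K) normalized adjacency.
    """
    return np.maximum(A @ H @ W, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
K, c = 8, 16                     # assumed: 8 keypoints, 16-dim features
H = rng.normal(size=(K, c))      # keypoint features from KNet (synthetic here)
A = np.ones((K, K)) / K          # fully-connected graph, uniform normalization
W1 = rng.normal(size=(c, c))
W2 = rng.normal(size=(c, c))
G = gcn_layer(gcn_layer(H, W1, A), W2, A)  # two stacked layers
```

With a fully-connected, uniformly normalized adjacency, each layer mixes every keypoint's features into every other, which is what lets distant neighboring teeth inform the masked implant site.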

4. Slope Prediction and Loss Function

Slope Representation

Implant orientation is parameterized by angular slopes $k = [k^1, k^2]$, analytically derived from the annotated axis points:

$k^1 = \frac{1}{N-1}\sum_{j=1}^{N-1}\frac{x_{j+1}-x_j}{z_{j+1}-z_j}, \qquad k^2 = \frac{1}{N-1}\sum_{j=1}^{N-1}\frac{y_{j+1}-y_j}{z_{j+1}-z_j}$

where $N$ is the number of 3D axis points for implant $i$.
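One plausible realization, assuming the slopes are averaged finite differences of the axis-point coordinates with respect to the depth axis $z$ (the point coordinates below are synthetic):

```python
import numpy as np

def axis_slopes(points):
    """Estimate implant-axis slopes k1 (x vs z) and k2 (y vs z).

    Assumed formulation: average finite differences over the N annotated
    3D axis points, ordered along the implant depth axis z.
    """
    p = np.asarray(points, float)
    dx, dy, dz = np.diff(p[:, 0]), np.diff(p[:, 1]), np.diff(p[:, 2])
    return float(np.mean(dx / dz)), float(np.mean(dy / dz))

# A synthetic straight axis: x advances 0.5 and y advances 0.25 per unit z.
pts = [(0, 0.0, 0), (1, 0.5, 2), (2, 1.0, 4)]
k1, k2 = axis_slopes(pts)  # → (0.5, 0.25)
```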

Network Structure and Loss

SPB flattens $M^{(e)}_4$, passes it through two fully connected layers (with ReLU and dropout), and outputs $\hat{k} = [\hat{k}^1, \hat{k}^2]$. The branch is trained with a regression loss on the slopes:

$\mathcal{L}_{\text{slope}} = \|\hat{k} - k\|_1$

The total loss combines segmentation and slope components:

$\mathcal{L} = \mathcal{L}_{\text{seg}} + \lambda\,\mathcal{L}_{\text{slope}}$

with segmentation loss

$\mathcal{L}_{\text{seg}} = \mathcal{L}_{\text{Dice}} + \mathcal{L}_{\text{BCE}}$

$\mathcal{L}_{\text{Dice}} = 1 - \frac{2\sum_v Y_v \hat{Y}_v}{\sum_v Y_v + \sum_v \hat{Y}_v}$

$\mathcal{L}_{\text{BCE}} = -\frac{1}{|\Omega|}\sum_{v\in\Omega}\left[Y_v \log \hat{Y}_v + (1 - Y_v)\log(1 - \hat{Y}_v)\right]$

where $Y$ and $\hat{Y}$ are ground-truth and predicted voxel-wise mask probabilities.
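A NumPy sketch of a Dice-plus-BCE segmentation loss with an L1 slope term; the additive combination and the weighting factor `lam` are assumptions for illustration, and the paper's exact weighting may differ.

```python
import numpy as np

def dice_bce_slope_loss(Y, Y_hat, k, k_hat, lam=1.0, eps=1e-7):
    """Combined loss: Dice + binary cross-entropy on the voxel mask,
    plus an L1 slope-regression term weighted by lam (assumed weighting)."""
    Y = np.asarray(Y, float)
    Y_hat = np.clip(np.asarray(Y_hat, float), eps, 1 - eps)
    dice = 1.0 - 2.0 * (Y * Y_hat).sum() / (Y.sum() + Y_hat.sum() + eps)
    bce = -np.mean(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))
    slope = np.abs(np.asarray(k_hat) - np.asarray(k)).sum()
    return dice + bce + lam * slope
```

A perfect mask and slope prediction drives the loss to (near) zero, while mismatched masks are penalized by both overlap and per-voxel terms.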

5. Construction of the ImplantFairy Dataset

RegFreeNet’s evaluation and training leverage the publicly accessible ImplantFairy dataset:

  • Composition: 1,622 CBCT scans acquired at Shenzhen University General Hospital.
  • Scanner/Resolution: KaVo 3D eXam; isotropic voxel spacing 0.2 mm.
  • Partition: 1,369 scans for training, 253 for testing.
  • Annotation Protocol: For each implant, clinicians annotate the apex ($p_a$), midpoint ($p_m$), and base ($p_b$) of the implant axis. A volumetric binary mask is constructed by sweeping a cylinder (radius 14 voxels) along the axis from $p_a$ to $p_b$.

The registration-free masking permits the aggregation of all postoperative scans regardless of preoperative data, facilitating multi-center contributions and large-scale data curation.

6. Training Protocol, Augmentation, and Inference

Training utilizes the PyTorch and MONAI frameworks on NVIDIA A40 GPUs with the following specifications:

  • Patch Extraction: Random fixed-size crops, without rotation or elastic deformation.
  • Batch Size: 4
  • Optimizer: AdamW with weight decay
  • Learning Rate Scheduling: Linear warmup to the initial learning rate, followed by cosine annealing
  • Augmentation: Random jitter of implant mask during training to promote spatial context learning.
  • Inference: Overlapped sliding window (25% overlap) at test time for smooth probability predictions
  • Training Duration: 100–200 epochs to convergence

No explicit regularization beyond AdamW weight decay is applied.
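The warmup-plus-cosine schedule can be written as a small pure-Python function; the warmup length and step counts used below are illustrative, not values reported by the paper.

```python
import math

def lr_at_step(step, total_steps, base_lr, warmup_steps):
    """Linear warmup to base_lr, then cosine annealing toward zero.

    Illustrative schedule; warmup_steps and total_steps are assumptions.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

In a PyTorch training loop this would typically be delegated to a built-in scheduler such as cosine annealing combined with a warmup wrapper, but the closed form above shows the shape of the curve.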

7. Quantitative Results and Evaluation Metrics

Performance is assessed using Dice similarity and Intersection-over-Union (IoU) for implant mask prediction.
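Both metrics reduce to overlap counts between binarized masks; a minimal NumPy version follows (the 0.5 binarization threshold is an assumption):

```python
import numpy as np

def dice_iou(pred, gt, thresh=0.5):
    """Dice and IoU between a predicted probability map and a binary mask."""
    p = np.asarray(pred) >= thresh
    g = np.asarray(gt).astype(bool)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    dice = 2.0 * inter / (p.sum() + g.sum()) if (p.sum() + g.sum()) else 1.0
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```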

Internal Evaluation (ImplantFairy Test Set)

Method | Dice (%) | IoU
--- | --- | ---
RegFreeNet | 47.22 | 0.3555
3D UNet | 45.02 | 0.3325

External Evaluation (No Fine-Tuning)

On 12 scans from Cui et al. and 12 from ToothFairy2:

Method | Dice (%) | IoU
--- | --- | ---
RegFreeNet | 31.87 | 0.2058
VNet | 26.24 | 0.1645

Results demonstrate that RegFreeNet surpasses nine state-of-the-art CNN/Transformer baselines internally, and generalizes nontrivially to external, unseen datasets, all without any registration process (Yang et al., 21 Jan 2026).
