
EdgeNeRF: Edge-Enhanced Neural Radiance Fields

Updated 11 January 2026
  • EdgeNeRF is a class of neural radiance field techniques that integrate image-space and density-gradient edges to preserve and enhance geometric boundaries.
  • It employs edge-guided regularization during training and volumetric edge extraction post-training to reduce artifacts and improve reconstruction fidelity.
  • Demonstrated on datasets like LLFF and DTU, EdgeNeRF methods yield superior PSNR, SSIM, and geometric accuracy, with promising applications in AR/VR and resource-constrained environments.

EdgeNeRF refers to a class of neural radiance field (NeRF) architectures and algorithms that leverage edge information—whether from image-space detector outputs or volumetric density gradients—to enhance 3D reconstruction fidelity, regularization, or geometric extraction in radiance field modeling. Specific EdgeNeRF variants address different objectives: from edge-guided regularization in sparse-view scenarios to 3D reconstruction via density-gradient filtering on trained NeRFs. This article synthesizes methods and results from "EdgeNeRF: Edge-Guided Regularization for Neural Radiance Fields from Sparse Views" (Yu et al., 4 Jan 2026) and "3D Density-Gradient based Edge Detection on Neural Radiance Fields (NeRFs) for Geometric Reconstruction" (Jäger et al., 2023), covering theoretical foundations, algorithmic strategies, implementation details, and evaluation metrics.

1. Motivation and Theoretical Principles

NeRF models achieve photorealistic synthesis from multi-view images by optimizing a volumetric MLP to regress radiance and density fields. However, under sparse view regimes or when direct density-thresholding is used for geometry extraction, artifacts appear: spurious volumes, loss of sharp boundaries, and incomplete surfaces. EdgeNeRF approaches are motivated by two principles:

  • Image-space edges typically correspond to scene discontinuities: Abrupt changes in depth or surface normal manifest as edges in projected images.
  • Density-gradient analysis in NeRF volumes robustly signals 3D boundaries: Local density variations are indicative of material interfaces, allowing edge detection independent of absolute density thresholds.

EdgeNeRF integrates these principles to either guide regularization terms during training (Yu et al., 4 Jan 2026) or to post-process trained NeRF fields into geometric point clouds or meshes (Jäger et al., 2023).
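
The second principle can be illustrated in one dimension. The sketch below uses a hypothetical smoothed-step density profile (standing in for a NeRF density field along a ray crossing a surface); the gradient magnitude peaks at the interface regardless of the absolute density scale, which is why no density threshold is required. The profile, scale, and grid are illustrative, not taken from either paper:

```python
import numpy as np

# Hypothetical 1D density profile: empty space (~0) -> solid (~sigma_max),
# smoothed to mimic a NeRF's continuous density field.
x = np.linspace(-1.0, 1.0, 201)
sigma_max = 50.0                                  # arbitrary density scale
density = sigma_max / (1.0 + np.exp(-x / 0.05))   # soft step at x = 0

# The gradient magnitude peaks at the interface, wherever it is.
grad = np.abs(np.gradient(density, x))
edge_x = x[np.argmax(grad)]
print(f"detected interface at x = {edge_x:.3f}")  # near 0.0

# Rescaling the density leaves the *location* of the peak unchanged,
# whereas any fixed density threshold would move with the scale.
grad_scaled = np.abs(np.gradient(10.0 * density, x))
assert np.argmax(grad_scaled) == np.argmax(grad)
```

A fixed density threshold (e.g. σ > 25) would classify a different set of samples as "solid" after rescaling; the gradient peak does not move, which is the property the volumetric filters in Section 3 exploit.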

2. Edge-Guided Regularization in Sparse-View NeRF

The EdgeNeRF framework (Yu et al., 4 Jan 2026) introduces edge-guided regularization to address performance degradation of NeRF under sparse input views.

  • Edge Extraction: For each training image I_i, DexiNed (a learned edge detector) generates score maps E_i(x,y), which are thresholded and dilated to produce binary edge masks B'_i(x,y).
  • Patch Sampling: Random 2×2 image patches are sampled; only non-edge pixels within each patch contribute to regularization.
  • Depth and Normal Smoothing: Regularization is restricted to non-edge regions to preserve geometric discontinuities. For depth, the loss is:

\mathcal{L}_z = \sum_{m=1}^{M} \sum_{i=1}^{4} \max\bigl(e_{m,i}\,|z_{m,i}-\bar{z}_m| - \tau_1,\; 0\bigr)

where \bar{z}_m is the mean depth among non-edge pixels in the patch and e_{m,i} indicates non-edge status.

  • Normal Consistency: Surface normals, computed from gradients of the learned density field, are regularized similarly:

\mathcal{L}_n = \sum_{m=1}^{M} \sum_{i=1}^{4} \max\bigl(e_{m,i}\,\|n_{m,i}-\bar{n}_m\|_2^2 - \tau_2,\; 0\bigr)

  • Optimization: The total loss,

\mathcal{L} = \lambda_1\mathcal{L}_c + \lambda_2\mathcal{L}_z + \lambda_3\mathcal{L}_n

combines photometric, edge-gated depth, and normal terms, with hyperparameters selected for each dataset.
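
The two edge-gated hinge losses can be sketched in a few lines of NumPy. This is a minimal illustration of the formulas above, not the authors' implementation; the function name, array layouts, and threshold values are assumptions:

```python
import numpy as np

def edge_gated_losses(z, n, edge_mask, tau1=0.01, tau2=0.01):
    """Edge-gated depth/normal hinge losses over M patches of 4 pixels.

    z:         (M, 4)    per-pixel rendered depths z_{m,i}
    n:         (M, 4, 3) per-pixel surface normals n_{m,i}
    edge_mask: (M, 4)    1 where the pixel lies on an edge, else 0
    """
    e = 1.0 - edge_mask                    # e_{m,i}: non-edge indicator
    # Weights that average over the non-edge pixels of each patch.
    w = e / np.maximum(e.sum(1, keepdims=True), 1.0)

    z_bar = (w * z).sum(1, keepdims=True)  # \bar{z}_m (mean non-edge depth)
    L_z = np.maximum(e * np.abs(z - z_bar) - tau1, 0.0).sum()

    n_bar = (w[..., None] * n).sum(1, keepdims=True)  # \bar{n}_m
    sq = ((n - n_bar) ** 2).sum(-1)                   # ||n_{m,i}-\bar{n}_m||^2
    L_n = np.maximum(e * sq - tau2, 0.0).sum()
    return L_z, L_n
```

For a geometrically flat patch both losses vanish; deviations beyond τ among non-edge pixels are penalized, while edge pixels (e = 0) contribute nothing, so discontinuities at detected boundaries are left untouched.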

This approach maintains geometric sharpness at boundaries and suppresses artifacts, with quantitative gains in PSNR (up to +0.53 dB), SSIM, and perceptual metrics compared to global regularizers (e.g., RegNeRF).

3. 3D Density-Gradient Edge Extraction in NeRF Volumes

EdgeNeRF (Jäger et al., 2023) employs volumetric edge detection filters to post-process trained NeRFs, extracting iso-surface curves or meshes without requiring thresholds on density values.

  • Voxelization: The continuous NeRF density field D:\mathbb{R}^3 \to \mathbb{R}_{\ge 0} is sampled on a regular 3D grid.
  • 3D Gradient Filters:
    • Sobel Filter: Computes directional derivatives in x, y, z following the classic 3D Sobel convolution, producing a gradient magnitude at each voxel.
    • Canny Filter (3D): Applies Gaussian smoothing, gradient computation, non-maximum suppression, double thresholding, and hysteresis to robustly extract edge voxels, adapting standard 2D Canny steps to 3D.
    • Laplacian of Gaussian (LoG): Detects second derivative zero-crossings, suitable for fine surface detail extraction but more sensitive to noise.
  • Thresholding: Edge masks are generated by relative thresholding of gradient magnitudes, normalizing across scenes to avoid per-dataset tuning.
  • Surface Generation: Edge voxels are aggregated into point clouds, colored via NeRF outputs, or meshed with Marching Cubes/Poisson reconstruction.
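
The Sobel variant of this pipeline can be sketched with scipy.ndimage; the grid resolution, threshold fraction, and toy density field below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from scipy import ndimage

def sobel_edge_voxels(density, rel_thresh=0.5):
    """3D Sobel edge detection on a voxelized density grid.

    density:    (X, Y, Z) array sampled from a NeRF density field
    rel_thresh: keep voxels whose gradient magnitude exceeds this
                fraction of the maximum (relative thresholding)
    Returns (K, 3) integer voxel indices of edge voxels.
    """
    gx = ndimage.sobel(density, axis=0)
    gy = ndimage.sobel(density, axis=1)
    gz = ndimage.sobel(density, axis=2)
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # Relative threshold: invariant to global rescaling of the density.
    mask = mag > rel_thresh * mag.max()
    return np.argwhere(mask)

# Toy scene: a solid ball of constant density in empty space.
grid = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
density = np.where(X**2 + Y**2 + Z**2 < 0.5**2, 30.0, 0.0)
edges = sobel_edge_voxels(density)
# Edge voxels concentrate near the ball's surface (radius ≈ 0.5).
```

Because the mask is defined relative to the maximum gradient magnitude, multiplying the density by any positive constant yields the same edge voxels, matching the scene-independence property described above.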

Canny-based extraction achieves high completeness (96% at 1.5mm) and correctness (0.80mm cloud-to-cloud distance), outperforming Sobel and LoG in gap elimination and uniformity.

4. Algorithmic Workflow and Implementation Details

Training-time edge-guided regularization (Yu et al., 4 Jan 2026):

for iter in 1..N_iters:
    sample a random image I_i and its edge mask B'_i
    sample M patches {P^I_m, P^{B'}_m}
    for each patch m:
        for each pixel i ∈ {1..4}:
            cast ray r_{m,i}; sample {σ, c} along the ray
            render color C_{m,i}, depth z_{m,i}, normal n_{m,i}
        form photometric loss L_c
        compute e_{m,i} = 1 - P^{B'}_m[i]
        compute patch means \bar{z}_m, \bar{n}_m
        compute L_z, L_n via the hinge-style formulas
    total loss L = λ1 L_c + λ2 L_z + λ3 L_n
    backpropagate and update Θ

Post-training volumetric edge extraction (Jäger et al., 2023):

for (i,j,k) in grid:
    D_{i,j,k} ← D(x_i, y_j, z_k)
if filter == 'Sobel':
    compute gradients G_x, G_y, G_z; Δ ← sqrt(G_x² + G_y² + G_z²)
elif filter == 'Canny':
    D_smooth ← Gaussian(D)
    compute gradients of D_smooth; apply non-maximum suppression,
        double thresholding, and hysteresis
    Δ ← hysteresis mask
elif filter == 'LoG':
    Δ ← LoG(D)   (zero-crossings mark edges)
for (i,j,k) in grid:
    if Δ_{i,j,k} ≥ threshold:
        mark (i,j,k) as an edge voxel
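
Turning the resulting edge-voxel mask into a world-space point cloud is then a matter of mapping voxel indices back to grid coordinates. A minimal sketch (the function name, grid origin, and voxel size are illustrative):

```python
import numpy as np

def edge_voxels_to_points(edge_mask, origin, voxel_size):
    """Map a boolean (X, Y, Z) edge-voxel mask to world-space points.

    origin:     world coordinate of the corner of voxel (0, 0, 0)
    voxel_size: edge length of one cubic voxel
    Returns a (K, 3) float array, one point per edge-voxel center.
    """
    idx = np.argwhere(edge_mask)            # (K, 3) integer indices
    return origin + (idx + 0.5) * voxel_size

# Toy example: two edge voxels in an 8^3 grid spanning [0, 1]^3.
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2, 3, 4] = mask[5, 5, 5] = True
pts = edge_voxels_to_points(mask, origin=np.zeros(3), voxel_size=1 / 8)
```

Each point can subsequently be colored by querying the trained NeRF at its location, or the set can be passed to Marching Cubes/Poisson reconstruction as described above.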

5. Experimental Evaluation and Comparative Results

EdgeNeRF methods are validated on LLFF and DTU datasets, and against point cloud baselines and global regularization methods.

  • Regularization Approach (Yu et al., 4 Jan 2026):
    • EdgeNeRF outperforms RegNeRF (e.g., LLFF PSNR: 19.42 vs. 19.08).
    • Gains are most pronounced in SSIM, indicating superior edge preservation.
    • Normal regularization adds moderate computational cost; pure depth regularization incurs negligible overhead.
    • Ablations show that edge guidance is essential—removing it collapses true boundary fidelity.
    • Edge extraction via DexiNed slightly exceeds Canny in low-contrast regions.
  • Gradient Extraction (Jäger et al., 2023):
    • Canny outperforms Sobel and LoG in both completeness and correctness.
    • The method generalizes across varying density scaling, avoiding scene-specific parameter tuning.
    • Point-based extraction yields dense, gap-free reconstructions; mesh extraction is optional for further post-processing.

6. Limitations and Directions for Future Research

  • EdgeNeRF Regularization:
    • Performance degrades in highly textured regions or with severe view sparsity.
    • Normal loss introduces computational overhead; further efficiency gains may be possible.
    • Smoothing in non-edge regions can impact semantic fidelity (perceptual LPIPS increases in some cases).
    • Future directions proposed include semantic-aware smoothing and integration of higher-level priors.
  • Volumetric Edge Extraction:
    • LoG filters are sensitive to rough surfaces; Canny requires careful kernel and threshold selection.
    • Interior artifacts may persist without additional flood-fill or interior-exclusion steps.
    • Extending filters to anisotropic density fields or hybrid representations is an open problem.

7. Relationship to Broader Edge-Aware NeRF Variants

EdgeNeRF strategies complement other edge-focused NeRF models such as NEF (Neural Edge Field) (Ye et al., 2023), which reconstructs 3D parametric feature curves by training an implicit edge-density field from multi-view edge-detected images. While NEF focuses on explicit curve extraction, EdgeNeRF regularization and gradient filtering are applicable to both geometry extraction and hybrid rendering pipelines. Notably, MixRT (Li et al., 2023) and EDR-NR (Yuan et al., 9 Oct 2025) address rendering efficiency on edge devices through hybrid representations or hardware-aware scheduling, not edge-aware fidelity; EdgeNeRF fills the gap for edge-specific geometric quality.


EdgeNeRF, as detailed in (Yu et al., 4 Jan 2026) and (Jäger et al., 2023), encompasses both training-time edge-guided regularization for sparse-view NeRFs and post-training edge extraction via volumetric gradient analysis. Both lines of research establish edge awareness as a crucial strategy for improving geometric fidelity, suppressing artifacts, and enabling high-quality 3D reconstruction from limited or noisy data, with extensibility into emerging AR/VR and resource-constrained environments.
