
Occupancy Ray Sampling for 3D Scene Inference

Updated 2 December 2025
  • Occupancy Ray Sampling is a ray-centric method that infers 3D occupancy by evaluating probabilistic and semantic values along rays from image pixels or sensor positions.
  • It leverages semantic guidance and occupancy networks to perform empty-space skipping and importance sampling, significantly boosting efficiency and scene coverage.
  • Empirical results demonstrate that ORS improves reconstruction accuracy and computational performance over legacy voxel grid and inverse sensor approaches.

Occupancy Ray Sampling (ORS) is a ray-centric methodology for sampling, inferring, and rendering 3D occupancy—probabilistic or semantic—by evaluating occupancy values along discrete or continuous rays projected from image pixels or sensor positions into 3D space. ORS has emerged as a foundational component in neural scene representation, autonomous driving, and probabilistic mapping frameworks. It replaces or augments legacy approaches such as uniform voxel grid sampling and ad-hoc inverse sensor models by leveraging semantic guidance, occupancy-network-driven empty-space skipping, or forward sensor models to improve sampling efficiency, spatial coverage, instance-level awareness, and geometric fidelity.

1. Mathematical Formulations of ORS

ORS generalizes the query of occupancy along rays parameterized by camera intrinsics, extrinsics, and view geometry. For a pixel (u,v), a ray r(u,v) is defined:

r(u,v) = \text{normalize}(R^T K^{-1} [u, v, 1]^T)

where K is the intrinsic matrix, R the rotation (extrinsic), and t the camera origin. For each of N sampled depths {d_n}, 3D locations are s_n(u,v) = t + d_n r(u,v).

Occupancy queries may use a discrete grid O[x, y, z] via trilinear interpolation, continuous neural fields (as in LeC²O-NeRF (Mi et al., 18 Nov 2024)), or probabilistic factor graphs (MRFMap (Shankar et al., 2020)). Semantic occupancy is queried as v(u,v,n) = O(s_n(u,v)) with class probability or binary likelihood.
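
For the discrete-grid case, a minimal sketch of the trilinear lookup is given below; the grid origin, voxel size, and the convention of returning 0 outside the grid are illustrative assumptions, not a specific paper's data layout.

import numpy as np

def query_occupancy(O, point, origin, voxel_size):
    # Trilinearly interpolate a dense occupancy grid O[x, y, z] at a world-space point.
    # origin is the world coordinate of voxel (0, 0, 0); voxel_size is the voxel edge length.
    g = (np.asarray(point) - origin) / voxel_size   # continuous grid coordinates
    i0 = np.floor(g).astype(int)                    # lower-corner voxel index
    if np.any(i0 < 0) or np.any(i0 + 1 >= np.array(O.shape)):
        return 0.0                                  # outside the mapped volume
    f = g - i0                                      # fractional offsets in [0, 1)
    occ = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                occ += w * O[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return occ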

In instance-aware sampling (ViPOcc SNOG (Feng et al., 15 Dec 2024)), the sampling probability over image pixels x is a mixture of instance-centric Gaussians and a uniform background component:

p(x) = (1 - \gamma) \sum_{k=1}^K \pi_k \mathcal{N}(x | \mu_k, \Sigma_k) + \gamma \mathcal{U}(x | s)

with non-overlap and anchor constraints imposed.
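
As a concrete reading of this formula, the following sketch evaluates the mixture density for a candidate pixel with SciPy; the argument names (gamma, pi, mu, Sigma) and the uniform support area s mirror the symbols above and are illustrative, not ViPOcc's released code.

from scipy.stats import multivariate_normal

def snog_density(x, gamma, pi, mu, Sigma, s):
    # Mixture of K instance-centric Gaussians plus a uniform background term U(x | s) = 1 / s.
    # x: (2,) pixel coordinate; pi: (K,) mixture weights; mu: (K, 2) centroids; Sigma: (K, 2, 2) covariances.
    gauss = sum(pi[k] * multivariate_normal.pdf(x, mean=mu[k], cov=Sigma[k])
                for k in range(len(pi)))
    return (1.0 - gamma) * gauss + gamma / s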

2. Sampling Strategies and Semantic Guidance

Early ORS approaches (Ray-ONet (Bian et al., 2021), CLONeR (Carlson et al., 2022)) apply uniform sampling of depths along each camera ray, enabling O(N²) scaling in network queries versus cubic grid approaches. More recent systems emphasize semantic, instance-aware, and importance-weighted sampling:

  • SNOG Sampler (ViPOcc): Allocates ray samples according to object instance masks obtained from Grounding DINO and SAM, weighting the Gaussian attractors by log-area, and rejecting overlaps to maximize spatial entropy (Feng et al., 15 Dec 2024).
  • Importance Sampling (CLONeR): Divides samples into uniform and occupancy-weighted halves, favoring depths where interpolated occupancy probability exceeds 0.5, and performing inverse-CDF sampling for concentration near occupied regions (Carlson et al., 2022); a sketch of this inverse-CDF step follows this list.
  • Occupancy Network-based Skipping (LeC²O-NeRF): Leverages learned occupancy networks to discard samples classified as "empty" prior to radiance field evaluation, pruning up to 85% of computational load (Mi et al., 18 Nov 2024).
  • Semantic Fusion (DualDiff): ORS produces rich per-view features aligned with each camera, fusing explicit geometry and dense semantic cues for multi-modal diffusion synthesis (Li et al., 3 May 2025).
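
To make the inverse-CDF step referenced in the CLONeR bullet concrete, here is a minimal NumPy sketch that redistributes ray samples toward depth bins with high interpolated occupancy; the bin discretization and clamping constants are assumptions for illustration, not the paper's exact implementation.

import numpy as np

def importance_sample_depths(occ_probs, bin_edges, n_samples, rng=None):
    # occ_probs: (B,) interpolated occupancy probability per coarse depth bin along the ray
    # bin_edges: (B + 1,) depth values bounding the bins
    # Returns n_samples depths concentrated where occupancy is high (inverse-CDF sampling).
    rng = rng or np.random.default_rng()
    weights = np.clip(occ_probs, 1e-5, None)          # avoid an all-zero CDF
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]
    u = rng.uniform(size=n_samples)                   # uniform draws in [0, 1)
    idx = np.searchsorted(cdf, u)                     # bin containing each draw
    lo = np.concatenate(([0.0], cdf))[idx]            # CDF value at the bin's lower edge
    frac = (u - lo) / np.maximum(cdf[idx] - lo, 1e-12)
    return bin_edges[idx] + frac * (bin_edges[idx + 1] - bin_edges[idx])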

3. Algorithmic Frameworks and Pseudocode

Implementations typically follow a multi-stage process: ray generation, depth sampling, occupancy query, and spatial filtering. Illustrative pseudocode (after DualDiff (Li et al., 3 May 2025) and ViPOcc SNOG (Feng et al., 15 Dec 2024)):

import numpy as np

occ = np.zeros((H, W, N))                            # per-pixel, per-depth occupancy values
for v in range(H):
    for u in range(W):
        # back-project the pixel and rotate the ray into the world frame
        ray_dir = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
        ray_dir /= np.linalg.norm(ray_dir)
        for n in range(N):
            point = t + depths[n] * ray_dir          # 3D sample s_n(u, v) along the ray
            # occupancy is zero for samples outside the mapped volume
            occ[v, u, n] = query_occupancy(O, point, origin, voxel_size) if inside_grid(point) else 0.0

Instance-aware ORS further samples x from the SNOG mixture, subject to non-overlap:

import numpy as np

X = []                                               # accepted pixel samples
while len(X) < M:
    if np.random.rand() < gamma:
        x = np.random.uniform([0.0, 0.0], [W, H])    # uniform background draw U(x | s)
    else:
        k = np.random.choice(len(pi), p=pi)          # pick an instance-centric Gaussian
        x = np.random.multivariate_normal(mu[k], Sigma[k])
    # non-overlap constraint: reject candidates closer than sqrt(2) * l to accepted samples
    if all(np.linalg.norm(x - x_i) >= np.sqrt(2) * l for x_i in X):
        X.append(x)

4. Occupancy Network Integration and Empty-space Skipping

Neural fields may incorporate occupancy estimation to guide ORS:

  • LeC²O-NeRF uses an occupancy MLP to assign each point to "scene" or "empty" experts, with imbalanced loss L_o and density loss L_d regulating the fraction and spatial distribution of occupied vs. empty samples (Mi et al., 18 Nov 2024).
  • CLONeR fuses occupancy grid learning (via log-odds SGD and interpolants) with LiDAR and camera ray sampling, updating both the geometry and color MLPs in alternating passes (Carlson et al., 2022).
  • MRFMap models occupancy as marginal probability after loopy belief propagation in a Markov field, evaluating joint potentials over rays and integrating sensor noise (Shankar et al., 2020).

ORS can thus be used for both direct occupancy querying and as a mechanism for empty-space skipping to accelerate photometric or semantic rendering.
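
As a schematic of occupancy-guided empty-space skipping in this spirit, the sketch below (in PyTorch) runs a cheap occupancy pre-pass and evaluates the radiance MLP only at samples classified as occupied; the network interfaces and the 0.5 threshold are assumptions for illustration, not LeC²O-NeRF's actual architecture.

import torch

def render_samples_with_skipping(points, view_dirs, occupancy_net, radiance_net, tau=0.5):
    # points, view_dirs: (R, N, 3) sampled locations and viewing directions along R rays.
    # Samples the occupancy network deems empty keep zero density/color, so the usual
    # downstream volume-rendering accumulation is unchanged.
    flat_pts = points.reshape(-1, 3)
    flat_dirs = view_dirs.reshape(-1, 3)
    with torch.no_grad():                            # cheap occupancy pre-pass
        occ = torch.sigmoid(occupancy_net(flat_pts)).squeeze(-1)
    keep = occ > tau                                 # boolean mask of "scene" samples
    density = torch.zeros_like(occ)
    color = torch.zeros(flat_pts.shape[0], 3, dtype=points.dtype, device=points.device)
    if keep.any():                                   # run the costly radiance MLP only where needed
        d, c = radiance_net(flat_pts[keep], flat_dirs[keep])
        density[keep] = d.squeeze(-1)
        color[keep] = c
    return density.view(points.shape[:2]), color.view(*points.shape[:2], 3)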

5. Empirical Performance and Evaluations

ORS consistently yields improvements in data efficiency, rendering accuracy, and scene coverage. Quantitative gains include:

  • ViPOcc SNOG: On KITTI-360, valid rays per iteration N_v rose by 5.4% (4.10k vs 3.89k), while coverage of crucial instances ψ_v increased by 413.7% (36.83% vs 7.17%). Addition of SNOG lifted invisible-scene accuracy by 3 ppt (Feng et al., 15 Dec 2024).
  • LeC²O-NeRF: Occupancy-guided skipping improved PSNR by 0.8–2.4 dB on Block-NeRF and 0.5–2.4 dB on Mega-NeRF, with 1/15th of the grid parameters and up to 85% of empty points pruned (Mi et al., 18 Nov 2024).
  • Ray-ONet: Achieved state-of-the-art 3D reconstruction on ShapeNet at 128³ resolution with a ~20× speed-up over ONet+MISE at nearly identical memory (Bian et al., 2021).
  • CLONeR: ORS substantially reduced depth error (SILog 0.101 vs 0.353) and improved novel-view PSNR by ~2.4 dB (Carlson et al., 2022).
  • MRFMap: Map accuracy of 0.917 vs 0.118 (OctoMap) in simulation; runs two orders of magnitude faster at 0.01 m voxel size (Shankar et al., 2020).

6. Application Domains and Extensions

ORS underpins multiple application areas:

  • Autonomous Driving: ViPOcc and DualDiff use ORS for camera-centric BEV prediction and conditioning multi-view diffusion processes. Semantic-guided sampling enables instance-level performance in dynamic scenes (Feng et al., 15 Dec 2024, Li et al., 3 May 2025).
  • Neural Scene Representations: ORS accelerates NeRF training and rendering by skipping empty space and optimizing sample concentration, enabling large-scale urban mapping (Mi et al., 18 Nov 2024, Carlson et al., 2022).
  • Probabilistic 3D Mapping: MRFMap employs ORS in a forward model with probabilistic inference and learned sensor noise for robotic mapping under occlusion and uncertainty (Shankar et al., 2020).
  • Single Image 3D Reconstruction: Ray-ONet demonstrates the computational efficiency and surface accuracy gains of ray-centric occupancy queries over volumetric grids (Bian et al., 2021).

7. Limitations, Implications, and Future Directions

Current ORS frameworks remain sensitive to errors in semantic instance masks, occupancy network calibration, grid resolution, and sensor noise modeling. Non-uniform sampling, while effective for instance coverage, may risk neglecting fine structures outside the guidance masks. A plausible implication is that integrating adaptive stratified sampling across both semantic and geometric cues may further boost both efficiency and reconstruction fidelity.

Recent works highlight that hybrid frameworks (occupancy networks + grid-based indices) and multi-modal fusion (via attention mechanisms) enhance both depth estimation and scene synthesis quality—suggesting research progress will emphasize scalability, instance diversity, and robustness to sparse semantics. Future ORS approaches are likely to integrate learned uncertainty, active exploration, and dynamic scene understanding within larger neural, probabilistic, and decision-theoretic systems.
