Probabilistic Inclusion Depth (PID)

Updated 24 December 2025
  • PID is a data depth measure that quantifies the centrality of fuzzy and binary contours using a probabilistic inclusion operator.
  • It computes inclusion scores as averages of a probabilistic inclusion operator, ensuring binary consistency, coordinate-agnosticism, and robustness to perturbations.
  • The PID-mean approximation enables linear-time, GPU-accelerated computation, making it scalable for high-dimensional ensemble visualizations.

Probabilistic Inclusion Depth (PID) is a data depth measure designed for centrality ordering and ensemble visualization of fuzzy contours—that is, soft masks output by modern segmentation models as well as conventional binary contours—across arbitrary spatial domains. PID generalizes previous contour depth concepts by employing a probabilistic inclusion operator that supports both fuzzy and crisp representations, accommodates non-uniform grids, and enables scalable computation for large and high-dimensional ensembles (Wu et al., 17 Dec 2025).

1. Probabilistic Inclusion Operator

Let Ω denote a spatial domain (e.g., a 2D image or 3D volume) equipped with a measure μ (counting measure for discrete grids, Lebesgue measure for continuous domains). A fuzzy contour, or soft mask, is described by a function

u : Ω → [0, 1]

such that u(x) indicates the probability or degree of membership of x in the contour. The total mass is m(u) = ∫_Ω u(x) dμ(x). Provided m(u) > 0, u induces a probability measure π_u(E) = (1/m(u)) ∫_E u(x) dμ(x) for measurable E ⊆ Ω.

Given two fuzzy masks u and v, the probabilistic inclusion operator is the expected membership in v under the measure induced by u:

inc(u, v) = E_{x∼π_u}[v(x)] = (1/m(u)) ∫_Ω u(x) v(x) dμ(x).

For indicator functions u = 1_A and v = 1_B, this reduces to inc(1_A, 1_B) = μ(A ∩ B)/μ(A), recovering the continuous-subset operator used in exact Inclusion Depth (eID).
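Under the counting measure, the operator reduces to weighted dot products of the membership arrays. A minimal NumPy sketch (the function name `inc` and the optional weight argument are illustrative, not taken from the paper):

```python
import numpy as np

def inc(u, v, w=None):
    """Probabilistic inclusion operator: inc(u, v) = E_{x ~ pi_u}[v(x)].

    u, v : membership values in [0, 1] over the same grid (any shape).
    w    : optional per-point measure weights mu(x); defaults to counting measure.
    """
    u = np.asarray(u, dtype=float).ravel()
    v = np.asarray(v, dtype=float).ravel()
    w = np.ones_like(u) if w is None else np.asarray(w, dtype=float).ravel()
    mass = np.sum(u * w)                    # m(u) = integral of u dmu
    assert mass > 0, "inc(u, v) requires m(u) > 0"
    return float(np.sum(u * v * w) / mass)  # expectation of v under pi_u
```

For binary masks this recovers the set-overlap ratio: if A ⊆ B, then inc(1_A, 1_B) = 1, while inc(1_B, 1_A) = μ(A)/μ(B).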

2. Mathematical Formulation of Probabilistic Inclusion Depth (PID)

Given an ensemble of N fuzzy contours u_1, …, u_N, PID assigns to each u_i a scalar depth quantifying its centrality. Two aggregate scores are formed:

IN(u_i) = (1/(N−1)) Σ_{j≠i} inc(u_i, u_j),   OUT(u_i) = (1/(N−1)) Σ_{j≠i} inc(u_j, u_i).

Here, IN(u_i) measures the average degree to which u_i is contained in the rest, and OUT(u_i) measures how well u_i contains the others. PID is then defined as

PID(u_i) = min{ IN(u_i), OUT(u_i) }.

This yields a robust, center-outward ordering: a deep member must simultaneously be well contained by and well contain its peers, so an outlier scores low on at least one of the two terms. In expanded form,

PID(u_i) = (1/(N−1)) min{ Σ_{j≠i} inc(u_i, u_j), Σ_{j≠i} inc(u_j, u_i) }.
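The depth computation above can be sketched directly in NumPy; a small nested 1D ensemble shows the center-outward behaviour (function names `inc` and `pid` are illustrative, not from the paper):

```python
import numpy as np

def inc(u, v):
    # Probabilistic inclusion E_{pi_u}[v] under the counting measure.
    return float(np.sum(u * v) / np.sum(u))

def pid(masks):
    """Exact Probabilistic Inclusion Depth for an ensemble of fuzzy masks.

    masks : sequence of equally shaped membership arrays.
    Returns a length-N array of depths min(IN_i, OUT_i).
    """
    masks = [np.asarray(m, dtype=float).ravel() for m in masks]
    n = len(masks)
    depths = np.empty(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        in_i = np.mean([inc(masks[i], masks[j]) for j in others])   # u_i inside the rest
        out_i = np.mean([inc(masks[j], masks[i]) for j in others])  # u_i containing the rest
        depths[i] = min(in_i, out_i)
    return depths

# Toy nested ensemble: the middle band should come out deepest.
u1 = np.array([0., 1., 1., 1., 1., 1., 0.])   # widest
u2 = np.array([0., 0., 1., 1., 1., 0., 0.])   # middle
u3 = np.array([0., 0., 0., 1., 0., 0., 0.])   # narrowest
d = pid([u1, u2, u3])
```

The widest and narrowest members are penalized through OUT and IN respectively, so u2 receives the highest depth.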

3. Theoretical Properties

PID and the inc operator exhibit several important properties:

  • Binary consistency: For binary masks (indicator functions), PID coincides exactly with eID.
  • Monotonicity in the second argument: inc(u, v) is non-decreasing in v.
  • Linearity in the second argument: inc(u, αv + βw) = α inc(u, v) + β inc(u, w).
  • Scale-invariance in the first argument: Multiplying u by a positive scalar does not affect inc(u, v).
  • Asymmetry: inc(u, v) ≠ inc(v, u) in general.
  • Lipschitz continuity: inc and PID are Lipschitz continuous in the L¹ norm, providing robustness to mask perturbations.
  • Coordinate-agnosticism: PID depends solely on membership values and the measure μ, and is invariant under measure-preserving transformations of Ω, supporting uniform/non-uniform grids and manifolds.
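Several of these properties can be checked numerically for the counting-measure form of the operator; a short sketch with hand-picked masks (all values illustrative):

```python
import numpy as np

def inc(u, v):
    # Probabilistic inclusion E_{pi_u}[v] under the counting measure.
    return float(np.sum(u * v) / np.sum(u))

u = np.array([0.2, 0.8, 0.5, 0.0])
v = np.array([0.9, 0.1, 0.4, 0.7])
w = np.array([0.5, 0.5, 0.0, 1.0])

# Binary consistency: for 1_A with A inside B, inclusion is exactly 1.
a = np.array([0., 1., 1., 0.])
b = np.array([1., 1., 1., 0.])
assert inc(a, b) == 1.0

# Scale-invariance in the first argument.
assert abs(inc(3.7 * u, v) - inc(u, v)) < 1e-12

# Linearity in the second argument.
lhs = inc(u, 0.2 * v + 0.3 * w)
rhs = 0.2 * inc(u, v) + 0.3 * inc(u, w)
assert abs(lhs - rhs) < 1e-12

# Asymmetry: swapping arguments changes the normalizing mass.
assert abs(inc(u, v) - inc(v, u)) > 1e-6
```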

4. Efficient Computation and PID-mean Approximation

Exact PID (Quadratic Complexity)

For N ensemble members on a grid of |Ω| points, full PID evaluation is O(N²|Ω|):

  • For each pair (u_i, u_j), compute the overlap integral ∫_Ω u_i(x) u_j(x) dμ(x), along with the denominators m(u_i) and m(u_j).
  • PID(u_i) is obtained via min{IN(u_i), OUT(u_i)} as described above.

PID-mean (Linear Complexity)

The ensemble mean mask ū = (1/N) Σ_j u_j enables a linear-time surrogate. With the leave-one-out mean ū_{−i} = (1/(N−1)) Σ_{j≠i} u_j,

PID-mean(u_i) = min{ inc(u_i, ū_{−i}), inc(ū_{−i}, u_i) }.

This involves O(N|Ω|) operations since ū is precomputed and each ū_{−i} follows from it in O(|Ω|). By linearity of inc in its second argument, the first term equals IN(u_i) exactly; only the OUT term is approximated. The approximation error |PID − PID-mean| is bounded by the coefficient of variation of the mask masses m(u_j); if mask masses are homogeneous, PID-mean closely approximates the exact PID.
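A minimal NumPy sketch of the linear-time surrogate (the name `pid_mean` is illustrative, not from the paper); on a toy ensemble with equal mask masses it reproduces the exact depths:

```python
import numpy as np

def inc(a, b):
    # Probabilistic inclusion E_{pi_a}[b] under the counting measure.
    return float(np.sum(a * b) / np.sum(a))

def pid_mean(masks):
    """Linear-time PID surrogate using the leave-one-out ensemble mean."""
    masks = np.asarray(masks, dtype=float)
    n = len(masks)
    total = masks.sum(axis=0)                    # N * mean mask, precomputed once
    depths = np.empty(n)
    for i in range(n):
        loo = (total - masks[i]) / (n - 1)       # mean of the other N-1 masks
        in_i = inc(masks[i], loo)                # exact: inc is linear in its 2nd argument
        out_i = inc(loo, masks[i])               # approximate: exact when masses are equal
        depths[i] = min(in_i, out_i)
    return depths

# Three shifted bands of equal mass: the middle one is deepest.
u1 = np.array([1., 1., 1., 0., 0.])
u2 = np.array([0., 1., 1., 1., 0.])
u3 = np.array([0., 0., 1., 1., 1.])
d = pid_mean([u1, u2, u3])
```

Because all three masks have mass 3, the OUT approximation incurs no error here and d matches exact PID.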

5. GPU Implementation and Computational Scalability

PID-mean's linear structure allows efficient GPU parallelization:

  • For each member u_i, a CUDA thread block computes the required numerators and denominators over Ω, exploiting memory coalescing for maximal bandwidth.
  • Within each block, voxels are processed in parallel, accumulating partial sums via shared-memory tree reduction; totals are transferred to the host for the final computation of PID-mean(u_i).

Empirical performance demonstrates:

Dataset                                      eID (CPU)   PID-mean (CPU)   PID-mean (GPU)
3D synthetic, ensembles up to 20k members    50–250 s    4–11 s           1.2–2.0 s (up to 125× vs. eID)
3D, higher-resolution volumes                22–216 s    14–29 s          2.3–5.5 s (up to 39× vs. eID)

For large N or high resolutions, eID becomes intractable in memory, while PID-mean (especially on GPU) remains scalable (Wu et al., 17 Dec 2025).

6. Fuzzy Isovalue Modeling and Sensitivity Encoding

PID supports a probabilistic treatment of isovalue selection in scalar-field ensembles by integrating over an isovalue distribution. For each scalar field f_i, instead of extracting a sharp mask at a single isovalue c, model the isovalue as a random variable C with density p(c) and set

u_i(x) = P( f_i(x) ≥ C ) = ∫ 1[f_i(x) ≥ c] p(c) dc.

This results in a fuzzy, "soft" isocontour capturing how small changes in the isovalue affect membership, encoding isolevel sensitivity. Such modeling is comparable to interval volumes and probabilistic marching cubes but specifically adapts to depth-based ensemble visualization via PID. These fuzzy masks directly serve as PID/PID-mean inputs.
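For a Gaussian isovalue distribution C ~ N(c₀, σ²) the integral has a closed form via the normal CDF: u(x) = Φ((f(x) − c₀)/σ). A short sketch (the function name `soft_isomask` and the Gaussian choice are illustrative assumptions, not prescribed by the paper):

```python
import numpy as np
from math import erf, sqrt

def soft_isomask(field, c0, sigma):
    """Fuzzy superlevel-set mask u(x) = P(f(x) >= C) for C ~ Normal(c0, sigma^2).

    field : scalar field sampled on a grid (any array shape).
    Returns membership values in [0, 1]; sigma -> 0 recovers the binary mask.
    """
    z = (np.asarray(field, dtype=float) - c0) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + np.vectorize(erf)(z))   # Gaussian CDF evaluated at f(x)

# Example: membership is 0.5 exactly at the nominal isovalue and
# saturates to 0/1 a few sigma away from it.
m = soft_isomask(np.array([1.0, 2.0, 3.0]), c0=2.0, sigma=0.1)
```

The resulting soft masks feed directly into PID or PID-mean in place of binary isocontours.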

7. Experimental Results and Practical Applications

Rank Stability and Consistency

PID yields significantly more stable ensemble rankings than previous methods under isovalue shifts and member removal:

  • For weather-forecast ensembles (N=20), PID achieves a high rank–rank Pearson correlation between the orderings obtained at two nearby isolevels (in metres), outperforming eID.
  • On synthetic contour sets, PID-mean agrees strongly with eID and Prob-IoU; CBD and ISM show weaker consistency.

Upon removing extreme outliers in 3D ellipsoid data, PID maintains a high rank correlation with the original ordering, exceeding Prob-IoU.

Application Domains

  • Medical segmentation: PID-mean ranks outputs from 31 SegResNet models (224×224×144) in the MSD Brain-Tumour dataset, producing 3D ensemble boxplots that delineate tumour shape uncertainty.
  • Large-scale binary masks: On IXI data (400 hippocampus, 400 ventricle masks), PID-mean extracts central envelopes and medians without the overplotting typical for spaghetti plots.
  • Scalar field ensembles: PID-mean applied to 50×150 ScalarFlow smoke plume reconstructions tracks uncertainty evolution in 3D time-series boxplots.
  • Flexible grids/manifolds: Per-point measure weights μ(x) enable PID-mean on uniform or non-uniform grids, meshes, or manifolds; the method is coordinate-agnostic.

Scalability extends to tens of thousands of ensemble members and high-resolution 3D grids using GPU PID-mean. For spatiotemporal ensembles, time is incorporated as an extra spatial axis, enabling 4D centrality rankings (Wu et al., 17 Dec 2025).


PID generalizes contour-depth analysis to fuzzy masks by substituting set inclusion with an expectation under the reference mask, retaining core theoretical guarantees (monotonicity, stability, coordinate-agnosticism, binary specialization) and enabling scalable, high-resolution ensemble visualization via linear- and GPU-accelerated algorithms. Demonstrated across synthetic and real applications in meteorology, medical imaging, and volumetric data, PID provides robust rankings, increased stability to isovalue fluctuations, and substantial computational acceleration relative to prior approaches (Wu et al., 17 Dec 2025).
