
Structured Uncertainty Similarity Score

Updated 7 December 2025
  • SUSS is a family of metrics that quantifies similarity via structured, model-specific decompositions in both vision and clinical applications.
  • The approach uses probabilistic models and ranking-based evaluations to offer interpretable, localized uncertainty measurements and error attributions.
  • Empirical findings demonstrate SUSS outperforms classical methods, delivering robust calibration in image quality and reliable uncertainty stratification in survival analysis.

The Structured Uncertainty Similarity Score (SUSS) is a family of statistical metrics for quantifying similarity or uncertainty based on the structural correspondence between data representations and their predicted behavior under learned stochastic or discriminative models. SUSS has two distinct instantiations: a probabilistic, interpretable perceptual metric for image comparison in computer vision (Seidler et al., 3 Dec 2025), and an individual-level uncertainty quantification index for patient-level survival models in clinical prediction (Wang et al., 2023). Each variant formalizes similarity via internally structured, model-specific decompositions and ranking-based or probabilistic evaluation schemes, applicable to both deep learning and statistical modeling.

1. Probabilistic Perceptual Similarity in Computer Vision

SUSS for image similarity is grounded in a generative, self-supervised probabilistic model that decomposes an image $X$ into $K$ perceptual components, such as multi-scale luminance and chrominance channels. For each component $k$, the model predicts a structured multivariate Normal distribution over human-imperceptible perturbations:

$$\tilde{X}_k \sim \mathcal{N}\bigl(\mu_k(X),\,\Sigma_k(X)\bigr)$$

with both the mean $\mu_k(X)\in\mathbb{R}^{n_k}$ and covariance $\Sigma_k(X)\in\mathbb{R}^{n_k\times n_k}$ parameterized as image-dependent functions. To ensure tractability and local interpretability, the precision matrix $\Sigma_k^{-1}$ is represented via a sparse Cholesky factor $L_k(X)$, such that

$$\Sigma_k(X) = \bigl[L_k(X)\,L_k(X)^\top\bigr]^{-1}$$

where $L_k$ is lower-triangular and nonzero only in local neighborhoods. This structure supports localized decorrelation and enables efficient log-density computation.

Given a reference image $X$ and a test image $Y$, each component yields a residual vector $r_k = Y_k - \mu_k(X)$. The log-likelihood under the $k^{\text{th}}$ component's Gaussian is

$$\log p_k(r_k) = -\tfrac12\, r_k^\top \Sigma_k^{-1} r_k - \tfrac12 \log\bigl|2\pi\,\Sigma_k\bigr|$$

with the Mahalanobis term efficiently computable as $\|L_k^\top r_k\|_2^2$, and $\log|\Sigma_k| = -2 \sum_i \log L_{k,ii}$.
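A minimal sketch of this per-component log-density, assuming dense NumPy arrays for the Cholesky factor (the model described above uses sparse, locally connected factors):

```python
import numpy as np

def component_log_likelihood(residual, L):
    """Log-density of residual r_k under N(0, Sigma_k), where the precision
    matrix is Sigma_k^{-1} = L L^T and L is lower-triangular.

    residual : (n,) array, r_k = Y_k - mu_k(X)
    L        : (n, n) lower-triangular array, L_k(X)
    """
    n = residual.shape[0]
    z = L.T @ residual                                   # whitened residual z_k = L_k^T r_k
    mahalanobis = z @ z                                   # ||L_k^T r_k||_2^2 = r_k^T Sigma_k^{-1} r_k
    log_det_sigma = -2.0 * np.sum(np.log(np.diag(L)))     # log|Sigma_k| from the factor diagonal
    return -0.5 * mahalanobis - 0.5 * (n * np.log(2.0 * np.pi) + log_det_sigma)
```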

The global SUSS score between $X$ and $Y$ is a weighted sum:

$$\mathrm{SUSS}(X,Y) = \sum_{k=1}^{K} w_k\,\log p_k(r_k)$$

where nonnegative weights $w_k$ are learned using human-labeled pairwise preference data, via cross-entropy loss applied to difference-of-score logits on two-alternative forced choice (2AFC) triplets.
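A hedged sketch of the aggregation and the 2AFC objective; the exact weight parameterization (e.g., how nonnegativity of $w_k$ is enforced) and the optimizer are not specified here:

```python
import numpy as np

def suss_score(component_log_liks, weights):
    """SUSS(X, Y) = sum_k w_k * log p_k(r_k) over the K perceptual components."""
    return float(np.dot(weights, component_log_liks))

def two_afc_loss(score_y0, score_y1, human_prefers_y1):
    """Cross-entropy on the difference-of-score logit for one 2AFC triplet
    (X, Y0, Y1): the image humans judged more similar to the reference X
    should receive the higher (less negative) SUSS score.
    """
    logit = score_y1 - score_y0
    p_y1 = 1.0 / (1.0 + np.exp(-logit))       # sigmoid of the score difference
    y = float(human_prefers_y1)
    return -(y * np.log(p_y1) + (1.0 - y) * np.log(1.0 - p_y1))
```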

2. Self-Supervised and Human-Calibrated Learning for Perceptual Models

The model for SUSS in perceptual tasks is trained by maximizing the likelihood of small, human-imperceptible augmentations $T_{s,l}(X)$ at multiple scales $s$ and augmentation levels $l$. The generative goal is to encourage high log-probability for such minimally distorted variants:

$$\mathcal{L}_{\mathrm{SUPN}} = \sum_{s=1}^{S} \sum_{l=0}^{L} w_l \left[-\log p\bigl(T_{s,l}(X) \mid \tilde{X}_{s,l}\bigr)\right]$$

with $w_l = 1/(l+1)$ favoring stricter invariances.
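A minimal sketch of this objective, reusing `component_log_likelihood` from the sketch above; `model(X, s) -> (mu, L)` and `augment(X, s, l) -> T_{s,l}(X)` are hypothetical interfaces standing in for the network and the imperceptible-augmentation pipeline:

```python
def supn_training_loss(model, augment, X, num_scales, num_levels):
    """Self-supervised objective: maximize the likelihood of imperceptible
    augmentations T_{s,l}(X), with w_l = 1/(l+1) emphasizing the mildest levels.
    """
    total = 0.0
    for s in range(1, num_scales + 1):
        mu, L = model(X, s)                      # assumed per-scale Gaussian parameters
        for l in range(num_levels + 1):
            w_l = 1.0 / (l + 1)
            residual = augment(X, s, l) - mu     # T_{s,l}(X) minus predicted mean
            total += w_l * (-component_log_likelihood(residual, L))
    return total
```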

Image-specific whitening transforms, instantiated as $z_k = L_k(X)^\top r_k$, provide explicit insight into perceptually salient residuals: high-magnitude $z_k$ coordinates correspond to pixel neighborhoods and features that the trained SUSS model deems important for similarity judgments. The sparsity and locality of $L_k$ (implemented using U-Net architectures with sparse connectivity) accentuate model transparency.
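For illustration, such a salient-coordinate readout might look like the following sketch; the `top_fraction` cutoff is an illustrative choice, not a detail from the source:

```python
import numpy as np

def salient_coordinates(residual, L, top_fraction=0.01):
    """Indices of the largest-magnitude whitened-residual coordinates
    z_k = L_k^T r_k, marking the pixel neighborhoods the model treats as
    most perceptually salient.
    """
    z = L.T @ residual
    k = max(1, int(top_fraction * z.size))
    return np.argsort(-np.abs(z))[:k]
```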

3. Sampling-Based Local Explanations and Generative Introspection

Sampling from the structured component Gaussians enables visualization and exploration of perceptually plausible images in the vicinity of $X$:

$$\epsilon \sim \mathcal{N}(0, I), \qquad \hat{X}_k = \mu_k(X) + \Sigma_k^{1/2}\,\epsilon = \mu_k(X) + L_k(X)^{-\top}\,\epsilon$$

By generating images $\hat{X}_k$ at various quantiles of the log-likelihood, one can empirically demonstrate the tightness of each component's invariance and provide localized, human-interpretable error attributions.
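A minimal sampling sketch, applying $L_k(X)^{-\top}$ via a triangular solve rather than an explicit matrix inverse:

```python
import numpy as np
from scipy.linalg import solve_triangular

def sample_component(mu, L, rng=None):
    """Draw X_hat_k ~ N(mu_k, Sigma_k), where L is the lower-triangular
    Cholesky factor of the precision matrix, so Sigma_k^{1/2} eps = L^{-T} eps.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(mu.shape[0])
    # Solve L^T x = eps, i.e. x = L^{-T} eps, without forming the inverse.
    perturbation = solve_triangular(L, eps, trans='T', lower=True)
    return mu + perturbation
```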

4. Empirical Performance and Benchmarks

On 2AFC human perceptual benchmarks (BAPPS, PieAPP, PIPAL), the base model SUSS-Base (without human fine-tuning) outperforms classic metrics such as PSNR, SSIM, and MS-SSIM, and approaches the accuracy of deep feature-based metrics such as LPIPS. With fine-tuning (e.g., the SUSS-BAPPS-RH and SUSS-PieAPP-RH variants), SUSS achieves 62–64% 2AFC accuracy on BAPPS (LPIPS ∼68%), and exhibits competitive Spearman rank correlation and PLCC on PieAPP and PIPAL.

On the KADID-10k dataset, SUSS exhibits strong perceptual calibration across blur, noise, and compression categories, yielding the lowest category-wise KL divergences to human mean opinion scores (MOS), indicating uniform distance assignments aligned with human percepts. Violin plots of SUSS distributions show tight demarcation between imperceptible and clearly perceptible distortions.

As a training loss, SUSS ensures stable optimization and artifact-free image reconstructions, matching or exceeding the qualitative sharpness and cleanliness of results produced by LPIPS and SSIM losses, while possessing formally convex local structure (by Mahalanobis norm properties) and Lipschitz-continuous gradients (Seidler et al., 3 Dec 2025).

5. Patient-Level Uncertainty Quantification in Survival Models

A distinct SUSS framework is defined for uncertainty quantification in survival prediction models for metastatic brain tumor patients (Wang et al., 2023). For an individual patient $x_0$, SUSS assigns a certainty score based on concordance between two rank orderings across the training set:

  1. Feature-space similarity ranking: For each training point $x_i$, compute a feature-wise dissimilarity loss $L_{\text{patient}}(x_i, x_0)$ (combining clinical nomogram differences and feature mismatch counts), and create an ascending patient similarity rank $psr(i)$.
  2. Prediction-space (model output) ranking: Compute model predictions $\hat{y}_0 = f(x_0)$ and $\hat{y}_i = f(x_i)$, average training-patient predictions within clusters grouped by $psr$, compute each cluster's squared prediction error relative to $x_0$'s prediction, and rank the clusters as $msr(j)$.
  3. Pairwise concordance (C-index): The patient's SUSS score is the fraction of group pairs $(i,j)$ for which the two orderings agree:

$$\text{SUSS}(x_0) = \frac{1}{\binom{k}{2}}\sum_{1 \leq i < j \leq k} \mathbb{I}\bigl[(gsr(i)-gsr(j))\,(msr(i)-msr(j)) > 0\bigr]$$

where $gsr(i)$ denotes the similarity rank of group $i$ (induced by $psr$), $msr(i)$ its model-error rank, and $k$ is the number of patient groups.

Values near $1$ indicate high agreement (the prediction tracks the most feature-similar patients, implying low uncertainty); values near $0.5$ indicate low information (the prediction does not track feature proximity).
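The following is a minimal NumPy sketch of this rank-concordance computation, not the pseudocode of (Wang et al., 2023); the equal-size grouping of patients by similarity rank is an illustrative assumption:

```python
import numpy as np
from itertools import combinations

def patient_suss(x0_pred, train_preds, dissimilarities, num_groups):
    """Patient-level SUSS sketch: concordance between the feature-space
    similarity ordering and the prediction-error ordering of patient groups.

    x0_pred         : model prediction f(x0) for the new patient
    train_preds     : (N,) predictions f(x_i) for the training patients
    dissimilarities : (N,) feature-wise losses L_patient(x_i, x0)
    num_groups      : number of patient groups k
    """
    order = np.argsort(dissimilarities)            # ascending similarity rank psr
    groups = np.array_split(order, num_groups)     # cluster patients by psr (assumed equal-size)
    group_err = np.array([np.mean((train_preds[g] - x0_pred) ** 2) for g in groups])
    gsr = np.arange(num_groups)                    # group similarity rank (by construction)
    msr = np.argsort(np.argsort(group_err))        # group model-error rank
    agree = sum(
        (gsr[i] - gsr[j]) * (msr[i] - msr[j]) > 0
        for i, j in combinations(range(num_groups), 2)
    )
    return agree / (num_groups * (num_groups - 1) / 2)
```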

6. Model-Level Uncertainty and Empirical Findings in Clinical Prediction

Model-level uncertainty is quantified via the increase in time-dependent AUC (C-index) that results from restricting evaluation to patients above a given SUSS threshold $u$:

$$\text{Uncertainty}_{\text{model}} = \frac{\max_u \text{AUC}(\text{SUSS} \geq u) - \text{AUC}_{\text{baseline}}}{\text{AUC}_{\text{baseline}}}$$
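A minimal sketch of this threshold sweep; `auc_fn` is an assumed evaluation hook mapping a boolean patient mask to a time-dependent AUC on that subset:

```python
import numpy as np

def model_level_uncertainty(suss_scores, auc_fn, thresholds):
    """Relative AUC gain from restricting evaluation to high-SUSS patients.

    suss_scores : (N,) patient-level SUSS values on the test set
    auc_fn      : callable, boolean mask -> time-dependent AUC (assumed hook)
    thresholds  : iterable of candidate thresholds u
    """
    auc_baseline = auc_fn(np.ones(len(suss_scores), dtype=bool))
    best_auc = max(auc_fn(suss_scores >= u) for u in thresholds)
    return (best_auc - auc_baseline) / auc_baseline
```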

Empirical studies on 1383 brain metastasis patients, evaluated across several survival models (CoxPH, CSF, NMTLR), yield coherent results under this metric:

  • NMTLR exhibits the lowest uncertainty (e.g., ∼1.6% on ICP, ∼2.0% on OS), followed by CSF and CoxPH.
  • Uncertainty is lowest on endpoints with simple progression (ICP) and highest on complex composites (ICPD).
  • Restricting test sets by high SUSS thresholds yields time-dependent AUC gains of up to 15–20% relative to baseline, indicating SUSS effectively stratifies by predictive certainty.

Pseudocode is provided in (Wang et al., 2023) for reproducible computation of patient-level SUSS.

7. Summary and Interpretability

The SUSS framework, across both domains, is characterized by the following properties:

  • It formalizes similarity as either a log-likelihood under a structured local generative model (computer vision) or as a ranking-concordance index (clinical prediction).
  • It is explicitly probabilistic, interpretable, and enables localized explanations through whitening transformations or group-wise stratification.
  • Training leverages self-supervised invariance (vision) and domain-appropriate feature metrics (clinical).
  • Empirically, SUSS delivers competitive alignment with human judgment, robust perceptual calibration, and reliable uncertainty quantification without reducing to opaque feature metrics.

References: (Seidler et al., 3 Dec 2025, Wang et al., 2023).
