Conditional Hyvarinen Score Differences
- Conditional Hyvarinen score differences are statistics derived from proper scoring rules that compare the local sharpness of two conditional models without relying on normalizing constants.
- They leverage gradients and Laplacians of log-conditional densities, enabling robust analysis when likelihoods are inaccessible, as with high-dimensional or unnormalized models.
- These score differences underpin advanced methods like score-based CUSUM for quickest change detection and Bayesian model selection, providing an effective alternative to traditional likelihood-based approaches.
Conditional Hyvärinen score differences are a class of statistics derived from proper scoring rules that compare the local sharpness of two conditional (or transition) models for possibly high-dimensional or unnormalized data-generating processes. They generalize the paradigm of likelihood-based comparison to contexts where likelihoods are inaccessible, focusing on gradients and Laplacians of log-conditional densities, and form the analytic core of several recent methods for estimation, model comparison, and quickest change detection in both independent and time-series or Markovian settings. Their key property is invariance to unknown normalizing constants, enabling application to unnormalized statistical models and implicit generative mechanisms.
1. Definitions and Mathematical Structure
The conditional Hyvärinen score for a model with one-step predictive (or transition) density $p(x_t \mid x_{t-1})$ is defined as

$$
S_H\big(x_t;\, p(\cdot \mid x_{t-1})\big) \;=\; 2\,\Delta_{x_t} \log p(x_t \mid x_{t-1}) \;+\; \big\| \nabla_{x_t} \log p(x_t \mid x_{t-1}) \big\|^2,
$$

where $\nabla_{x_t}$ and $\Delta_{x_t}$ denote the gradient and Laplacian with respect to the current observation $x_t$. For conditional models $p_1$ and $p_2$, the conditional Hyvärinen score difference at time $t$ is

$$
z_t \;=\; S_H\big(x_t;\, p_1(\cdot \mid x_{t-1})\big) \;-\; S_H\big(x_t;\, p_2(\cdot \mid x_{t-1})\big),
$$

where $x_{t-1}$ collects the conditioning information (the previous state, or the full history $x_{1:t-1}$ in non-Markovian settings). For unnormalized models, where $p(x_t \mid x_{t-1}) \propto \tilde p(x_t \mid x_{t-1})$ is only available up to an intractable normalizing constant, the score remains computable, as it depends solely on derivatives of the log-density with respect to $x_t$ (Wu et al., 2023, Chen et al., 6 Nov 2025, Shao et al., 2017).
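As a concrete illustration of the definition and its normalization invariance, the following sketch (an assumed toy setting: a scalar Gaussian AR(1) transition kernel, using the convention $S_H = 2\Delta \log p + \|\nabla \log p\|^2$) evaluates the conditional score analytically and checks it against finite differences of the *unnormalized* log-density, from which any normalizing constant has dropped out:

```python
import numpy as np

def cond_hyvarinen_score_gaussian(x_t, x_prev, a, sigma2):
    """Conditional Hyvarinen score S_H = 2*Laplacian + |grad|^2 of
    log p(x_t | x_prev) for the Gaussian AR(1) kernel N(a * x_prev, sigma2)."""
    grad = -(x_t - a * x_prev) / sigma2   # d/dx_t log p(x_t | x_prev)
    lap = -1.0 / sigma2                   # d^2/dx_t^2 log p(x_t | x_prev)
    return 2.0 * lap + grad ** 2

def numerical_score(x_t, x_prev, a, sigma2, h=1e-4):
    """Same score from finite differences of the *unnormalized* log-density
    log p~ = -(x - a*x')^2 / (2*sigma2); the constant -log Z never appears."""
    logp = lambda x: -(x - a * x_prev) ** 2 / (2.0 * sigma2)
    grad = (logp(x_t + h) - logp(x_t - h)) / (2.0 * h)
    lap = (logp(x_t + h) - 2.0 * logp(x_t) + logp(x_t - h)) / h ** 2
    return 2.0 * lap + grad ** 2

analytic = cond_hyvarinen_score_gaussian(1.3, 0.5, 0.8, 0.25)
numeric = numerical_score(1.3, 0.5, 0.8, 0.25)
```

The agreement of the two values shows that only derivatives of the log-density enter the score, which is exactly why unnormalized models pose no difficulty.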
In the Markovian context, one compares candidate transition kernels $p_\infty$ (pre-change) and $p_1$ (post-change) via the score difference

$$
z_t \;=\; S_H\big(x_t;\, p_\infty(\cdot \mid x_{t-1})\big) \;-\; S_H\big(x_t;\, p_1(\cdot \mid x_{t-1})\big),
$$

with conditional Hyvärinen scores defined with respect to $x_t$ at the conditioning state $x_{t-1}$. These differences play roles analogous to log-likelihood ratio increments in classical sequential analysis, but for energy-based or unnormalized models (Chen et al., 6 Nov 2025, Wu et al., 2023).
2. Statistical Interpretation and Information Divergence
The conditional Hyvärinen score is a strictly proper scoring rule that is homogeneous in the density (invariant to multiplicative rescaling, hence to normalizing constants). Its expected value connects to the Fisher divergence (for marginal or joint models) or to the conditional Fisher divergence in the Markov setting. For score differences under the true pre-change kernel $p_\infty$, the mean

$$
\mathbb{E}_{p_\infty}\!\left[ z_t \mid x_{t-1} \right] \;=\; -\,D_F\big(p_\infty(\cdot \mid x_{t-1}) \,\big\|\, p_1(\cdot \mid x_{t-1})\big) \;\le\; 0,
\qquad
D_F(p \,\|\, q) \;=\; \mathbb{E}_{x \sim p}\big\| \nabla_x \log p(x) - \nabla_x \log q(x) \big\|^2,
$$

is negative, while after the change it becomes positive under the alternative. This sign switch ensures that, as in log-likelihood ratio settings, accumulated score-difference statistics naturally drift upward post-change and downward pre-change, a property essential for sequential detection or model comparison (Chen et al., 6 Nov 2025).
In the i.i.d. prediction context, the sample-averaged conditional Hyvärinen score differences converge almost surely to differences in Fisher risks between the true generating measure $p^\ast$ and competing models, with

$$
\frac{1}{n} \sum_{t=1}^{n} \Big[ S_H(x_t;\, p_1) - S_H(x_t;\, p_2) \Big] \;\xrightarrow{\ \mathrm{a.s.}\ }\; D_F(p^\ast \,\|\, p_1) \;-\; D_F(p^\ast \,\|\, p_2),
$$

yielding a consistency theorem for model comparison procedures based on summed conditional score differences (Shao et al., 2017).
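The sign switch is easy to verify by Monte Carlo. In the assumed toy setting of a unit-variance Gaussian mean shift (pre-change $N(0,1)$, post-change $N(1,1)$, for which $S_H(x; N(\mu,1)) = -2 + (x-\mu)^2$), the pre- and post-change drifts of the score difference come out at $\mp D_F = \mp 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def score_gauss(x, mu):
    # Hyvarinen score S_H = 2*Laplacian + |grad|^2 of N(mu, 1): -2 + (x - mu)^2
    return -2.0 + (x - mu) ** 2

mu_pre, mu_post = 0.0, 1.0            # assumed pre-/post-change means
n = 200_000
x_pre = rng.normal(mu_pre, 1.0, n)    # draws from the pre-change law
x_post = rng.normal(mu_post, 1.0, n)  # draws from the post-change law

# z = S_H(x; p_pre) - S_H(x; p_post): its mean is -D_F(p_pre || p_post) = -1
# under the pre-change law and +D_F(p_post || p_pre) = +1 under the post-change law.
drift_pre = np.mean(score_gauss(x_pre, mu_pre) - score_gauss(x_pre, mu_post))
drift_post = np.mean(score_gauss(x_post, mu_pre) - score_gauss(x_post, mu_post))
```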
3. Applications: Change Detection, Model Comparison, and Estimation
Quickest Change Detection
Conditional Hyvärinen score differences underpin the Score-based CUSUM (SCUSUM) algorithm for quickest change detection in both i.i.d. and Markovian regimes, extending classical CUSUM to unnormalized and high-dimensional models. The test statistic recursively accumulates truncated score differences,

$$
W_t \;=\; \max\{\, W_{t-1} + \bar z_t,\; 0 \,\}, \qquad W_0 = 0,
$$

where $\bar z_t$ denotes the score difference $z_t$ truncated to control for unbounded increments. Stopping times are defined as $\tau = \inf\{ t \ge 1 : W_t \ge b \}$, where the threshold $b > 0$ is selected to control false alarms (Chen et al., 6 Nov 2025, Wu et al., 2023).
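The recursion can be sketched in a few lines. All parameter choices below (clipping level, threshold $b$, change point, drift values) are hypothetical stand-ins for the calibrated quantities in the cited work:

```python
import numpy as np

def scusum(z, b, clip=10.0):
    """Score-based CUSUM sketch: accumulate clipped score-difference
    increments, reflect at zero, stop when the statistic crosses b."""
    w = 0.0
    for t, z_t in enumerate(z, start=1):
        w = max(w + float(np.clip(z_t, -clip, clip)), 0.0)  # truncated increment
        if w >= b:
            return t  # stopping time
    return None       # no detection within the sample

# Toy stream of score differences: drift -1 before the change at t = 300
# and +1 after it (cf. the Gaussian mean-shift example), threshold b = 20.
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(-1.0, 2.0, 300), rng.normal(1.0, 2.0, 300)])
tau = scusum(z, b=20.0)
```

The reflected statistic hovers near zero while the increments drift downward, then climbs roughly linearly after the change, so `tau` lands shortly after t = 300.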
Bayesian Model Comparison
Conditional Hyvärinen score sums provide an alternative to Bayes factors, especially when models involve vague priors or intractable likelihoods. For two models with predictive densities $p_1(x_t \mid x_{1:t-1})$ and $p_2(x_t \mid x_{1:t-1})$, cumulative score differences across out-of-sample data asymptotically distinguish models by their conditional Hyvärinen risks, even in non-nested settings. Sequential Monte Carlo (SMC) methods can be used to estimate the conditional scores and their differences online, leveraging weighted particle representations of posterior parameters (Shao et al., 2017).
Linear Time Series Estimation
In Gaussian linear processes (AR, MA, ARFIMA), conditional score differences reduce to quadratic functions of one-step prediction errors and can be used for both parameter estimation and comparison: for a Gaussian predictive density with mean $\hat x_t(\theta)$ and variance $\sigma_t^2(\theta)$,

$$
S_H(x_t;\, p_\theta) \;=\; \frac{e_t(\theta)^2}{\sigma_t^4(\theta)} \;-\; \frac{2}{\sigma_t^2(\theta)},
$$

where $e_t(\theta) = x_t - \hat x_t(\theta)$ is the one-step prediction error under parameter $\theta$. This enables consistent and (sometimes) fully efficient estimation without recourse to normalizing constants or full likelihoods (Columbu et al., 2019).
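Because the summed score is quadratic in the prediction errors, minimizing it over an AR(1) model reduces to least squares, so the Hyvärinen estimator coincides with OLS/conditional MLE there. A minimal sketch on simulated data (assumed AR(1) setting; all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a Gaussian AR(1): x_t = a * x_{t-1} + eps_t, eps_t ~ N(0, sigma2).
a_true, sigma2_true, n = 0.6, 0.5, 50_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(sigma2_true))

x_prev, x_curr = x[:-1], x[1:]

# Summed conditional Hyvarinen score: sum_t e_t(a)^2 / sigma2^2 - 2/sigma2.
# Minimizing over a is least squares on the prediction errors e_t(a);
# minimizing then over sigma2 gives the mean squared residual.
a_hyv = np.sum(x_prev * x_curr) / np.sum(x_prev ** 2)  # OLS = Hyvarinen argmin
resid = x_curr - a_hyv * x_prev
sigma2_hyv = np.mean(resid ** 2)
```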
4. Computation and Implementation
Score Estimation
Direct computation of conditional scores is generally infeasible in high dimensions or for energy-based models. The required gradients are consistently estimated via conditional score matching, where a neural network $s_\theta(x_t, x_{t-1}) \approx \nabla_{x_t} \log p(x_t \mid x_{t-1})$ is trained to minimize

$$
J(\theta) \;=\; \mathbb{E}\Big[ \tfrac{1}{2}\, \big\| s_\theta(x_t, x_{t-1}) \big\|^2 \;+\; \nabla_{x_t} \!\cdot s_\theta(x_t, x_{t-1}) \Big],
$$

exploiting integration by parts to bypass normalization (Chen et al., 6 Nov 2025).
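The mechanics of this objective show up already in a deliberately tiny example: a linear score model $s_\theta(x) = -\theta x$ fit to marginal Gaussian data, where the objective $\mathbb{E}[\tfrac12 s_\theta(x)^2 + s_\theta'(x)]$ has the closed-form minimizer $\hat\theta = 1/\overline{x^2}$, recovering the true precision. (The conditional case replaces $x$ by the pair $(x_t, x_{t-1})$ and the linear model by a network; everything here is illustrative, not the cited implementation.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Data from N(0, sigma2); the true score is -x / sigma2, i.e. theta* = 1/sigma2.
sigma2 = 0.25
x = rng.normal(0.0, np.sqrt(sigma2), 100_000)

def sm_objective(theta, x):
    """Empirical score-matching objective E[0.5*s^2 + s'] for s(x) = -theta*x."""
    s = -theta * x       # model score
    s_prime = -theta     # its x-derivative (the integration-by-parts term)
    return np.mean(0.5 * s ** 2 + s_prime)

# The empirical objective is 0.5*theta^2*mean(x^2) - theta, minimized at:
theta_hat = 1.0 / np.mean(x ** 2)
```

Note that the data's normalizing constant never enters: the objective only involves the model score and its derivative.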
Sequential Monte Carlo Approximation
For Bayesian model comparison and state-space inference, particle-based SMC approximates the predictive/posterior distributions and their gradients. At each time $t$, given weighted samples $\{(\theta_t^{(i)}, w_t^{(i)})\}_{i=1}^N$, the predictive density is approximated by the mixture $\hat p(x_t \mid x_{1:t-1}) = \sum_i w_t^{(i)}\, p(x_t \mid x_{1:t-1}, \theta_t^{(i)})$, and the conditional Hyvärinen score is estimated from weighted combinations of the gradients and Laplacians of the component log-predictive densities, with variance reduction applied. The resulting per-observation estimates are accumulated for testing or model selection (Shao et al., 2017).
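A sketch of the plug-in computation for such a particle mixture (assuming scalar Gaussian predictive components; the function name is illustrative and the variance-reduction step is omitted):

```python
import numpy as np

def hyvarinen_score_mixture(x, w, mu, sigma2):
    """Hyvarinen score 2*(log p)'' + ((log p)')^2 of a weighted Gaussian
    mixture p(x) = sum_i w_i N(x; mu_i, sigma2_i) -- the form produced by
    a particle approximation of a one-step predictive density."""
    w, mu, sigma2 = map(np.asarray, (w, mu, sigma2))
    comp = w * np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    p = comp.sum()                                                    # p(x)
    dp = np.sum(comp * (-(x - mu) / sigma2))                          # p'(x)
    d2p = np.sum(comp * ((x - mu) ** 2 / sigma2 ** 2 - 1 / sigma2))   # p''(x)
    grad = dp / p                  # (log p)'(x)
    lap = d2p / p - grad ** 2      # (log p)''(x)
    return 2 * lap + grad ** 2

# Illustrative particle weights, predictive means, and variances:
w, mu, s2 = [0.5, 0.3, 0.2], [0.0, 1.0, -0.5], [1.0, 0.5, 2.0]
score = hyvarinen_score_mixture(0.3, w, mu, s2)
# A single component recovers the plain Gaussian score -2 + (x - mu)^2:
score_single = hyvarinen_score_mixture(0.3, [1.0], [0.0], [1.0])
```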
Truncation and Boundedness
For proper theoretical guarantees in Markov settings, increments are truncated to ensure boundedness, enabling the use of concentration inequalities (e.g., Hoeffding's inequality for Markov chains) and robustifying finite-sample behavior (Chen et al., 6 Nov 2025).
5. Theoretical Properties
Consistency and Optimality
Under regularity conditions (ergodicity, differentiability, bounded envelope conditions, and posterior concentration), conditional Hyvärinen score-based procedures are consistent: sample averages converge to the true risk differences, and, in change detection, drift direction and detection-delay scalings are preserved relative to classical KL-based procedures. For Gaussian models, Fisher and KL divergences are proportional, so Hyvärinen- and likelihood-based approaches attain the same efficiency (Columbu et al., 2019, Wu et al., 2023, Shao et al., 2017).
Control of Error Rates
For change detection, thresholds on accumulated (truncated) scores yield exponential lower bounds for mean time to false alarm and asymptotic upper bounds for detection delay. Under uniform ergodicity (Doeblin’s condition) and light-tail conditions, these bounds ensure rigorous performance guarantees in Markov settings, with explicit expressions for ARL and WADD in terms of the score-difference drift and boundedness constants (Chen et al., 6 Nov 2025).
Extensions to Discrete Data
On discrete state spaces, a finite-difference analogue of the Hyvärinen score (using central and one-sided differences) provides a proper, local, homogeneous score for model comparison, maintaining the same prequential and asymptotic properties as in the continuous setting (Shao et al., 2017).
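One illustrative finite-difference construction in this spirit (not necessarily the exact score of the cited work) replaces the gradient of the log-pmf by a central difference and the Laplacian by a second difference; like the continuous score, it is unchanged when a constant is added to the log-pmf, i.e. it never sees the normalizing constant:

```python
from math import lgamma, log

def discrete_score(logp, x):
    """Finite-difference analogue of the Hyvarinen score on the integers:
    a central difference stands in for the gradient of log p and a second
    difference for its Laplacian. Illustrative construction only."""
    grad = (logp(x + 1) - logp(x - 1)) / 2.0
    lap = logp(x + 1) - 2.0 * logp(x) + logp(x - 1)
    return 2.0 * lap + grad ** 2

# Poisson(4) pmf, known only up to its normalizing constant exp(-4):
logp_unnorm = lambda k: k * log(4.0) - lgamma(k + 1)  # log of 4^k / k!
logp_norm = lambda k: logp_unnorm(k) - 4.0            # properly normalized

s_unnorm = discrete_score(logp_unnorm, 3)
s_norm = discrete_score(logp_norm, 3)
```

The constant cancels in both the central and second differences, so the two evaluations agree exactly.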
6. Practical Significance and Scope
Conditional Hyvärinen score differences enable a unified, likelihood-free framework for model comparison, parameter estimation, and sequential detection in high-dimensional, intractable, or unnormalized models. They are well-matched to modern settings such as high-dimensional transition kernels, energy-based generative models, and time-series with latent or implicit dynamics. Key empirical findings include:
- SCUSUM achieves false alarm control matching classical CUSUM, with detection delays within a constant factor, outperforming kernel MMD-based tests in high-dimensional, non-Gaussian, and unnormalized regimes (Wu et al., 2023).
- In linear-Gaussian models, conditional Hyvärinen estimators are equivalent to OLS/conditional MLE in AR and nearly as efficient in ARFIMA and MA models without requiring full covariance evaluation (Columbu et al., 2019).
- SMC-based estimation of score differences is feasible for both tractable and intractable models, with demonstrated model selection consistency in large samples (Shao et al., 2017).
A plausible implication is expanding application domains for sequential analysis, Bayesian comparison, and score-based learning to settings previously inaccessible due to normalization or likelihood challenges.