Conditional Gaussian Equivalence
- Conditional Gaussian equivalence is a framework that determines when conditional operations or projections on Gaussian or near-Gaussian models yield statistically equivalent distributions.
- It applies to diverse settings—including central limit theory, graphical models, functional fields, and quantum information—facilitating robust inference and model identifiability.
- Rigorous bounds using Wasserstein distance and relative entropy underpin algorithmic advances in score-based learning and empirical risk minimization in high-dimensional contexts.
Conditional Gaussian equivalence is a multifaceted concept that arises in the study of when probabilistic models, distributions, or random fields with (partially or fully) Gaussian structure become equivalent under certain conditional operations, transformations, or parameter regimes. The topic encompasses central limit theorems for conditional projections, equivalence in graphical models, functional field theory, mixture models with discrete conditioning, and universality principles in high-dimensional statistics.
1. Conditional Gaussian Equivalence in Central Limit Theory
Conditional Gaussian equivalence is rigorously formalized in the context of high-dimensional random projections. Given a random vector X in R^n and a k × n random projection matrix Θ with i.i.d. entries, one considers the conditional law of the projection ΘX given Θ. The principal quantitative results establish explicit bounds for the deviation of this conditional law from a reference Gaussian law (having the same mean and covariance), measured in quadratic Wasserstein distance and relative entropy (Reeves, 2016):
- Quadratic Wasserstein bound: the expected squared Wasserstein-2 distance between the conditional law of ΘX given Θ and its Gaussian surrogate is bounded by explicit functionals of X that quantify the deviation of the norm of X from its typical value and the size of the low-order chaos moments of X.
- Relative entropy bound: for the noisy projection (ΘX plus independent Gaussian noise), the expected relative entropy to the Gaussian surrogate obeys an analogous bound in terms of the same deviation functionals.
These theorems show that, provided the projection dimension k grows slowly relative to the ambient dimension n, and that the deviations of X from Gaussianity (via norm concentration and low-order chaos moments) are controlled, the conditional law is close to its Gaussian surrogate in a strong probabilistic sense. Talagrand's transportation-cost inequality relates the relative-entropy bound to the Wasserstein distance for the conditional law. Applications encompass the analysis of random linear estimation schemes, where small information divergence signals near-optimality for compressed sensing protocols (Reeves, 2016).
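As a numerical illustration of this conditional central limit effect (not code from the cited paper), one can project strongly non-Gaussian data through a fixed random direction and measure the distance to a moment-matched Gaussian surrogate; the dimensions, the Rademacher data, and the one-dimensional Wasserstein metric are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 2000, 5000  # ambient dimension and sample count (illustrative)

# Strongly non-Gaussian coordinates: i.i.d. Rademacher entries.
X = rng.choice([-1.0, 1.0], size=(n_samples, n))

# One fixed random projection direction (conditioning on the projection).
theta = rng.standard_normal(n) / np.sqrt(n)
Z = X @ theta  # samples from the conditional law of the projection

# Gaussian surrogate with matched mean and standard deviation.
G = rng.normal(Z.mean(), Z.std(), size=n_samples)

# Empirical Wasserstein-1 distance between equal-size samples via sorting.
w1 = np.mean(np.abs(np.sort(Z) - np.sort(G)))
print(w1 < 0.1)  # small distance: the projection is nearly Gaussian
```

Increasing the ambient dimension n shrinks the distance further, mirroring the role of the dimension ratio in the quantitative bounds.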
2. Gaussian Equivalence in Graphical and Structural Models
In the context of linear Gaussian structural equation models, conditional Gaussian equivalence is formalized via the concept of distribution equivalence between directed graphs. Two structures G1 and G2 are declared equivalent if they induce the same parametric family of precision matrices on the observed data, i.e., Θ(G1) = Θ(G2), where Θ(G) denotes the set of all precision matrices generated by well-posed linear Gaussian models whose support is imposed by G (Ghassami et al., 2019). For acyclic graphs (DAGs), this aligns with classical Markov equivalence (same skeleton and v-structures), but for general cyclic graphs, equivalence is characterized via analytic methods (Givens rotations acting on the model's coefficient matrix) and graphical transformations (parent reductions/exchanges, cycle reversions). Implementing this framework yields efficient algorithms for score-based structure learning and identifiability assessment in both cycle-free and cyclic settings.
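For the acyclic case, the classical Markov-equivalence check (same skeleton and v-structures) is easy to sketch; the dictionary-of-parents graph encoding below is an illustrative choice, not the representation used in the cited work:

```python
import itertools

def skeleton(dag):
    """Undirected edge set of a DAG given as {node: set of parents}."""
    return {frozenset((u, v)) for v, ps in dag.items() for u in ps}

def v_structures(dag):
    """Colliders u -> v <- w whose endpoints u, w are non-adjacent."""
    skel = skeleton(dag)
    out = set()
    for v, ps in dag.items():
        for u, w in itertools.combinations(sorted(ps), 2):
            if frozenset((u, w)) not in skel:
                out.add((u, v, w))
    return out

def markov_equivalent(d1, d2):
    """Classical criterion: same skeleton and same v-structures."""
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

# x -> y -> z and x <- y <- z share skeleton and have no colliders.
chain1 = {"x": set(), "y": {"x"}, "z": {"y"}}
chain2 = {"x": {"y"}, "y": {"z"}, "z": set()}
# The collider x -> y <- z is not equivalent to the chains.
collider = {"x": set(), "y": {"x", "z"}, "z": set()}

print(markov_equivalent(chain1, chain2))    # True
print(markov_equivalent(chain1, collider))  # False
```

The cyclic case requires the richer analytic and graphical machinery described above, for which no such simple two-line criterion exists.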
3. Functional Gaussian Equivalence for Spherical Random Fields
For isotropic Hilbert-space-valued Gaussian fields on the sphere S^2, conditional Gaussian equivalence refers to equivalence of the induced Gaussian measures on the path space, governed by a functional Feldman-Hájek criterion. Covariance operators admit a Schoenberg spectral decomposition into a sum, over degrees ℓ, of trace-class operators on the value space tensored with the projections onto the degree-ℓ spherical harmonics. Equivalence is then determined by a Hilbert-Schmidt summability condition: the (multiplicity-weighted) squared Hilbert-Schmidt norms of the deviations of the rescaled spectral components from the identity must be summable over ℓ.
The operator-valued criterion generalizes classical scalar results; in particular, equivalence of all scalar projections is subsumed and dominated by the functional norm condition (Caponera et al., 28 Nov 2025). Applications include model identifiability and robust inference in functional data analysis and spatial statistics on manifolds.
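A scalar-spectrum analogue of this summability criterion can be probed numerically; the power-law spectra, the multiplicity weights 2ℓ + 1, and the truncation level below are illustrative assumptions, not the cited paper's operator-valued setting:

```python
import numpy as np

def hs_equivalence_diagnostic(a, b, weights):
    """Partial sums of sum_l w_l * (b_l / a_l - 1)^2, a scalar analogue of
    the Hilbert-Schmidt summability criterion: bounded partial sums are
    consistent with equivalence of the two Gaussian measures."""
    a, b, w = (np.asarray(x, float) for x in (a, b, weights))
    return np.cumsum(w * (b / a - 1.0) ** 2)

ells = np.arange(1, 2001)
mult = 2 * ells + 1                    # multiplicity of degree-l harmonics
a = ells ** -4.0                       # reference angular power spectrum
b_equiv = a * (1 + 1.0 / ells ** 2)    # ratio -> 1 fast: summable deviation
b_sing = a * 2.0                       # ratio bounded away from 1: divergent

s_equiv = hs_equivalence_diagnostic(a, b_equiv, mult)
s_sing = hs_equivalence_diagnostic(a, b_sing, mult)
print(s_equiv[-1])  # stays bounded as the truncation grows
print(s_sing[-1])   # grows without bound: mutually singular measures
```

A constant rescaling of the spectrum, innocuous in finite dimensions, thus already produces singular measures in the functional setting.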
4. Conditional Gaussian Mixture Equivalence and Categorical Probability
Conditional Gaussian equivalence is realized diagrammatically in the analysis of hybrid probabilistic models—specifically, conditional Gaussian mixture models (CGMMs) where discrete random variables select among continuous Gaussian distributions. Within categorical probability frameworks, equivalence is defined by the equality of kernel representations induced by string diagrams. The calculus employs a two-colour string diagram syntax, compositional semantics (strict symmetric monoidal functor to kernels), and a finite set of equational axioms (comonoid laws, naturality, arrangement of if-then-else branching, barycentric associativity). The main structural theorem affirms that two diagrams are equivalent (i.e., represent the same CGMM) iff they are identified by the equational theory (Torres-Ruiz et al., 6 Oct 2025). This yields complete soundness and completeness results for reasoning about conditional Gaussian mixtures in compositional structures.
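Setting the categorical machinery aside, the kernel semantics of a CGMM is simply "sample a discrete choice, then sample from the chosen Gaussian"; the sketch below checks one trivially equivalent pair of presentations (a 50/50 mixture of two identical components versus a single component) by comparing sample moments, an illustrative stand-in for the diagrammatic equational reasoning:

```python
import numpy as np

rng = np.random.default_rng(1)

def cgmm_sample(weights, means, stds, n):
    """Sample a conditional Gaussian mixture: a discrete latent variable
    selects which Gaussian component to draw from."""
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.take(means, idx), np.take(stds, idx))

n = 200_000
# Two presentations of the same kernel: a 50/50 mixture of two copies of
# N(0, 1) versus a single N(0, 1) component.
x1 = cgmm_sample([0.5, 0.5], [0.0, 0.0], [1.0, 1.0], n)
x2 = cgmm_sample([1.0], [0.0], [1.0], n)

# Equivalent presentations agree in distribution; compare first moments.
print(abs(x1.mean() - x2.mean()) < 0.05, abs(x1.var() - x2.var()) < 0.05)
```

The equational theory decides such identifications symbolically and completely, whereas sampling can only provide statistical evidence.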
5. Conditional Gaussian Equivalence in High-Dimensional Learning and Empirical Risk Minimization
Conditional Gaussian equivalence arises in high-dimensional statistics, particularly in settings where Gaussian equivalence theory (GET) fails due to non-universal behavior of random feature regimes. When both data and feature dimensions scale polynomially, and the target function depends on a low-dimensional non-Gaussian subspace (e.g., a signal direction w), naive replacement by Gaussian surrogates (GET) is inadequate. The conditional Gaussian equivalent (CGE) model repairs this deficiency by explicitly conditioning on the low-dimensional non-Gaussian component while Gaussianizing the orthogonal complement (Wen et al., 3 Dec 2025): the Hermite-chaos features depending on the projection onto w retain their non-Gaussian law, while the remaining features are replaced by Gaussian counterparts. Empirical risk minimization with these CGE features yields sharp asymptotic formulas for training and test error, substantiated by two-phase Lindeberg swapping and Wiener-chaos CLTs. Practical implications include refined universality principles that condition on low-rank non-Gaussian structures (Wen et al., 3 Dec 2025).
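The CGE construction can be caricatured as "keep the projection onto the signal direction exactly, Gaussianize the rest"; the sketch below implements that caricature for illustration only (the direction w, the Rademacher data, and the variance matching are assumptions, not the cited paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

def cge_surrogate(X, w):
    """Illustrative CGE-style surrogate: retain each sample's (possibly
    non-Gaussian) coefficient along the signal direction w, and replace
    the orthogonal complement by a variance-matched Gaussian."""
    w = w / np.linalg.norm(w)
    coef = X @ w                            # retained low-dimensional part
    X_orth = X - np.outer(coef, w)          # orthogonal complement
    G = rng.standard_normal(X.shape) * X_orth.std()
    G -= np.outer(G @ w, w)                 # keep the noise off the w axis
    return np.outer(coef, w) + G

n_samples, d = 1000, 50
w = rng.standard_normal(d)
X = rng.choice([-1.0, 1.0], size=(n_samples, d))  # non-Gaussian data
Xs = cge_surrogate(X, w)

# The projection onto w is preserved exactly by construction.
wn = w / np.linalg.norm(w)
print(np.allclose(X @ wn, Xs @ wn))
```

Plain GET would Gaussianize the component along w as well, which is precisely what breaks universality when the target depends on that direction.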
6. Conditional Gaussian Equivalence in Quantum and Continuous-Variable Information
In the quantum-classical Gaussian information context, conditional local equivalence (conditional Gaussian equivalence) is established via standard-form reductions of covariance matrices under Gaussian local unitaries (GLU). Two n-mode Gaussian states are equivalent under GLU iff their standard forms coincide; these are computed algorithmically via local Williamson decompositions and singular-value arrangements of the off-diagonal blocks (Giedke et al., 2013). Extensions to Gaussian LOCC (local operations with classical communication) introduce additional constraints and entanglement monotones based on off-diagonal determinants. Pure multi-mode states exhibit a conditional hierarchy of resource convertibility; notably, symmetric GHZ/W-type states do not universally generate all pure Gaussian states by local Gaussian operations.
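The Williamson invariants entering such standard forms are the symplectic eigenvalues of the covariance matrix, computable as the moduli of the eigenvalues of iΩσ, where Ω is the symplectic form; the sketch below uses the convention in which the vacuum state has covariance equal to the identity:

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    """Symplectic eigenvalues of a 2n x 2n covariance matrix sigma: the
    positive eigenvalues of i * Omega * sigma (Williamson invariants)."""
    n = sigma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.linalg.eigvals(1j * omega @ sigma)
    # Eigenvalues come in +/- pairs; keep the n positive representatives.
    return np.sort(np.abs(ev.real))[n:]

# Single-mode thermal state with mean photon number nbar:
# sigma = (2 * nbar + 1) * I, so its symplectic eigenvalue is 2 * nbar + 1.
nbar = 1.5
sigma = (2 * nbar + 1) * np.eye(2)
nu = symplectic_eigenvalues(sigma)
print(np.allclose(nu, [2 * nbar + 1]))
```

Two states that are GLU-equivalent necessarily share these local invariants, although matching invariants alone do not exhaust the standard-form comparison of the off-diagonal blocks.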
7. Conditional Equivalence via Topological and Statistical Conditioning
Certain equivalence phenomena in statistical physics—such as for topologically conditioned scalar Gaussian free fields (GFF) on metric graphs—are realized by conditioning a GFF on topological events (e.g., trivial holonomy of sign clusters). Gauge-twisted GFFs are proven equivalent, up to deterministic sign flips within clusters, to ordinary GFFs conditioned on a topological event. The exact probability of equivalence is computable as a ratio of Laplacian determinants, and the equivalence holds in law for absolute values of the fields on the event (Lupu, 2022). The phenomenon extends to high dimensions, where Poissonian loop soups exhibit intensity-doubling transitions.
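As a toy illustration of how topological twisting changes Laplacian determinants (not the metric-graph computation of the cited work), one can compare the determinant of the ordinary cycle Laplacian, which vanishes because constants lie in its kernel, with that of a sign-twisted cycle Laplacian, which is strictly positive:

```python
import numpy as np

def cycle_laplacian(n, twist=False):
    """Graph Laplacian of the n-cycle; with twist=True, one edge carries a
    sign flip, mimicking nontrivial holonomy of a gauge-twisted field."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        j = (i + 1) % n
        s = -1.0 if (twist and i == n - 1) else 1.0
        L[i, j] -= s
        L[j, i] -= s
    return L

n = 8
det_plain = np.linalg.det(cycle_laplacian(n))        # 0: constants in kernel
det_twist = np.linalg.det(cycle_laplacian(n, True))  # > 0: twist removes kernel
print(abs(det_plain) < 1e-9, det_twist > 0)
```

Ratios of such determinants are exactly the kind of quantity that appears in the probability-of-equivalence formulas for conditioned fields.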
The landscape of conditional Gaussian equivalence thus spans the domains of central limit theory, graphical and functional models, algebraic and categorical probability, high-dimensional statistics, quantum information, and structured random fields. It enables rigorous quantification and classification of when and how conditional transformation, projection, or conditioning preserves or induces Gaussianity, and under what criteria probabilistic, statistical, or quantum objects can be equated via conditional transformations.