Unilaterally Truncated Gaussian Distributions
- Unilaterally truncated Gaussian distributions (UTGDs) are defined by conditioning a Gaussian variable to exceed a fixed threshold, yielding closed-form moments and specialized sampling methods.
- They are widely applied in state estimation, control systems, and machine learning to handle non-negativity constraints and censored data.
- Advanced estimation and simulation techniques facilitate precise parameter recovery and variance calibration, ensuring rapid convergence and robust modeling.
A unilaterally truncated Gaussian distribution (UTGD) arises when a Gaussian random variable is restricted to a semi-infinite interval, typically values greater than a fixed threshold. UTGDs are ubiquitous in applications that impose hard physical or logical constraints, including state estimation with non-negativity requirements, truncated noise processes in control, structured graphical models with rectifying nonlinearities, and the analysis of incomplete or censored data. Theoretical work on UTGDs centers on their closed-form moments, characterizations of maximal variance, efficient parameter estimation, sampling algorithms, and implications for statistical learning, especially in situations requiring concentration inequalities or sub-Gaussian analysis.
1. Definition and Core Properties
Let $X \sim \mathcal{N}(\mu, \sigma^2)$ and fix a threshold $a \in \mathbb{R}$. The unilaterally truncated (lower truncated) Gaussian distribution is the law of $X$ conditional on $X > a$. Its density is given by
$$f(x \mid \mu, \sigma, a) = \frac{1}{\sigma}\,\frac{\phi\!\left(\frac{x-\mu}{\sigma}\right)}{1 - \Phi(\alpha)}, \qquad x > a,$$
where $\phi$ is the standard normal density and $\Phi$ its cumulative distribution. The normalizing constant is $Z = 1 - \Phi(\alpha)$, with the standardized truncation point $\alpha = (a - \mu)/\sigma$.
Key closed-form results:
- Mean: $\mathbb{E}[X \mid X > a] = \mu + \sigma\,\lambda(\alpha)$, where $\lambda(\alpha) = \phi(\alpha)/\big(1 - \Phi(\alpha)\big)$ is the inverse Mills ratio.
- Variance: $\mathrm{Var}[X \mid X > a] = \sigma^2\big(1 + \alpha\,\lambda(\alpha) - \lambda(\alpha)^2\big)$
- Moment Generating Function (MGF): $M(t) = e^{\mu t + \sigma^2 t^2/2}\,\dfrac{1 - \Phi(\alpha - \sigma t)}{1 - \Phi(\alpha)}$
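These expressions are easy to verify numerically. The following minimal sketch (illustrative parameter values) checks the closed-form mean and variance against SciPy's `truncnorm`, which takes standardized truncation bounds:

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma, a = 1.0, 2.0, 0.5                     # illustrative parent parameters and threshold
alpha = (a - mu) / sigma                          # standardized truncation point
lam = norm.pdf(alpha) / (1 - norm.cdf(alpha))     # inverse Mills ratio

# Closed-form moments of X | X > a
mean_cf = mu + sigma * lam
var_cf = sigma**2 * (1 + alpha * lam - lam**2)

# SciPy's truncnorm expects standardized bounds (a - mu)/sigma and (b - mu)/sigma
rv = truncnorm(alpha, np.inf, loc=mu, scale=sigma)
print(mean_cf, rv.mean())   # should agree
print(var_cf, rv.var())     # should agree
```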
2. Maximal Variance, Bounds, and Calibration
A key analytical result, established in "The Maximal Variance of Unilaterally Truncated Gaussian and Chi Distributions" (Petrella, 14 Nov 2025), is
$$\sup_{\mu,\,\sigma}\ \mathrm{Var}[X \mid X > a] = (m - a)^2,$$
where $m$ is the fixed mean of the truncated distribution and $a$ is the threshold. The supremum is approached as the location parameter $\mu \to -\infty$ (at fixed $m$ and $a$), reflecting the fact that pushing the Gaussian far into the left tail, while calibrating $\sigma$ so that the mean remains fixed, maximizes variance. For fixed cutoff $a$, the variance can always be expressed in terms of $\sigma$ and the standardized truncation point $\alpha$.
Approximations are used for parameter inference and calibration:
- Closed-form approximations for the truncated mean and variance, involving explicit polynomial terms as functions of the scaled shift, are tabulated in Table VII of (Petrella, 14 Nov 2025).
Moment-intersecting and point-slope methods allow highly precise parameter recovery, typically outperforming naive least-squares, and converge rapidly with small relative errors observed in practice. Calibration workflows can thus achieve accurate matching of empirical mean-variance pairs to UTGD parameters.
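As a concrete calibration sketch, generic root-finding on the closed-form moments (not the moment-intersecting or point-slope constructions of (Petrella, 14 Nov 2025); parameter values are illustrative) recovers $(\mu, \sigma)$ from a target mean-variance pair:

```python
import numpy as np
from scipy.optimize import root
from scipy.stats import norm

def utgd_moments(mu, sigma, a):
    """Closed-form mean and variance of X ~ N(mu, sigma^2) conditioned on X > a."""
    alpha = (a - mu) / sigma
    lam = norm.pdf(alpha) / norm.sf(alpha)      # inverse Mills ratio
    mean = mu + sigma * lam
    var = sigma**2 * (1 + alpha * lam - lam**2)
    return mean, var

def calibrate(target_mean, target_var, a, x0=(0.0, 1.0)):
    """Solve for (mu, sigma) whose UTGD moments match the targets (generic root-finding).

    A solution can only exist if the target variance is attainable for the given
    mean and threshold (cf. the maximal-variance bound discussed above)."""
    def residual(p):
        mu, log_sigma = p                       # log-parameterize sigma to keep it positive
        m, v = utgd_moments(mu, np.exp(log_sigma), a)
        return [m - target_mean, v - target_var]
    sol = root(residual, x0=[x0[0], np.log(x0[1])])
    return sol.x[0], np.exp(sol.x[1])

mu_hat, sigma_hat = calibrate(target_mean=1.2, target_var=0.4, a=0.0)
print(mu_hat, sigma_hat, utgd_moments(mu_hat, sigma_hat, a=0.0))
```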
3. Efficient Sampling Algorithms
Efficient simulation from UTGDs is critical for probabilistic modeling, inference, and Monte Carlo methods. A suite of optimized algorithms are established in "Fast simulation of truncated Gaussian distributions" (Chopin, 2012):
- For univariate UTGDs, a table-based accept-reject method achieves a high acceptance probability across the relevant range of truncation points and remains highly efficient even for large truncation points. The approach partitions the region under the density into rectangles, leveraging precomputed tables for function evaluations, thereby minimizing floating-point overhead.
- Expected operations per sample are $O(1)$, with most draws requiring only a uniform variate, two multiplications, and a comparison.
For bivariate and higher dimensions, stratified accept–reject proposals or block Gibbs strategies are used. For example, in bivariate semi-infinite truncation, the acceptance rate exceeds 0.5 for all admissible parameterizations.
For conditional UTGD sampling in graphical models or Gibbs samplers (e.g., in RTGGM architectures), the standard inverse CDF method or these accept-reject samplers are typically used, and the proposals remain efficient when parameters or truncation points change dynamically.
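For intuition, a compact accept-reject sampler is sketched below. It uses the classical shifted-exponential tail proposal rather than the precomputed-table scheme of (Chopin, 2012), so it is simpler but somewhat slower; all names are illustrative.

```python
import numpy as np

def sample_std_tail(alpha, size, rng):
    """Draw Z ~ N(0,1) conditioned on Z > alpha via accept-reject.

    Shifted-exponential proposal for alpha > 0 (classical Robert-style scheme);
    plain rejection of standard normals otherwise. Not the table-based method
    of (Chopin, 2012), but simple and reasonably efficient."""
    out = np.empty(size)
    filled = 0
    if alpha <= 0:
        while filled < size:
            z = rng.standard_normal(size)
            z = z[z > alpha]
            take = min(size - filled, z.size)
            out[filled:filled + take] = z[:take]
            filled += take
    else:
        lam = (alpha + np.sqrt(alpha**2 + 4.0)) / 2.0   # optimal exponential rate
        while filled < size:
            z = alpha + rng.exponential(1.0 / lam, size)
            accept = rng.random(size) <= np.exp(-0.5 * (z - lam) ** 2)
            z = z[accept]
            take = min(size - filled, z.size)
            out[filled:filled + take] = z[:take]
            filled += take
    return out

def sample_utgd(mu, sigma, a, size, rng):
    """Sample X ~ N(mu, sigma^2) conditioned on X > a."""
    alpha = (a - mu) / sigma
    return mu + sigma * sample_std_tail(alpha, size, rng)

rng = np.random.default_rng(0)
x = sample_utgd(mu=1.0, sigma=2.0, a=0.5, size=100_000, rng=rng)
print(x.mean(), x.var())   # compare with the closed forms of Section 1
```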
4. Parameter Estimation and Statistical Inference
UTGDs present unique challenges in parameter estimation, especially under unknown truncation points or multi-parameter settings. "Efficient Truncated Statistics with Unknown Truncation" (Kontonis et al., 2019) addresses the problem of inferring the parameters of the underlying Gaussian, together with the unknown truncation set, from i.i.d. truncated samples:
- The maximum likelihood (ML) landscape is non-concave in general, motivating a two-stage procedure:
- Set recovery (support estimation) via Hermite polynomial expansions.
- Parameter extraction by recasting the likelihood as a convex optimization (in natural re-parameterizations such as $\mu/\sigma^{2}$ and $1/\sigma^{2}$).
Alternatively, moment-based estimators solve the system defined by the empirical mean and variance of the samples against the UTGD closed-form moment expressions. Given sufficiently many samples, one can reconstruct the parameters to arbitrarily small error $\varepsilon$, leveraging strong convexity in 1D and analytic gradients.
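For the known-threshold case, a direct numerical maximum-likelihood fit is also straightforward. The sketch below (using `scipy.optimize`; names are illustrative) optimizes the truncated log-likelihood over $(\mu, \log\sigma)$; it is not the two-stage Hermite-expansion procedure, which is required when the truncation set itself is unknown.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_utgd_mle(x, a, init=(0.0, 0.0)):
    """Maximum-likelihood fit of (mu, sigma) for lower-truncated Gaussian samples x > a.

    Known truncation point a; optimizes numerically over (mu, log sigma).
    A sketch of direct ML -- the unknown-truncation setting requires the
    two-stage procedure of (Kontonis et al., 2019)."""
    x = np.asarray(x)

    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        z = (x - mu) / sigma
        alpha = (a - mu) / sigma
        # log density: log phi(z) - log sigma - log(1 - Phi(alpha))
        logpdf = norm.logpdf(z) - log_sigma - norm.logsf(alpha)
        return -np.sum(logpdf)

    res = minimize(nll, x0=np.array(init), method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

# Illustrative use with samples x > 0.5, e.g. from the sampler sketched above:
# mu_hat, sigma_hat = fit_utgd_mle(x, a=0.5)
```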
In the context of state-space models (e.g., process/measurement noise partially observed due to one-sided constraints), EM algorithms are adapted for truncated Gaussian process and measurement noise (González et al., 25 Jul 2025). This involves moment-matching at the M-step, and the use of Monte Carlo or particle smoothers to estimate sufficient statistics under truncated innovation processes.
5. Applications in Graphical Models and Machine Learning
UTGDs naturally arise in graphical models with non-negativity or rectification constraints, as in Restricted Truncated Gaussian Graphical Models (RTGGM) and related deep learning architectures (Su et al., 2016):
- In bipartite Gaussian graphical models, imposing non-negativity by replacing hidden variables with UTGDs results in conditionally independent, tractable univariate truncated distributions.
- The conditional mean of a UTGD, as a function of its natural parameter, provides a smoothed ReLU activation: for $h \sim \mathcal{N}(\mu, \sigma^2)$, $\mathbb{E}[h \mid h > 0] = \mu + \sigma\,\phi(\mu/\sigma)/\Phi(\mu/\sigma)$, converging to $\max(0, \mu)$ as $\sigma \to 0$ (see the code sketch following this list).
- Deep extensions allow parameter sharing and unsupervised pre-training for feedforward ReLU networks using UTGD-based conditional means, enabling the transfer of inference mechanisms from probabilistic models to deterministic neural architectures.
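A minimal sketch of this conditional-mean nonlinearity (here $\sigma$ is treated as a per-unit scale parameter; names and values are illustrative):

```python
import numpy as np
from scipy.stats import norm

def smoothed_relu(mu, sigma=1.0):
    """Conditional mean E[h | h > 0] for h ~ N(mu, sigma^2): a smoothed ReLU.

    Converges pointwise to max(0, mu) as sigma -> 0. For very negative
    mu/sigma a log-domain implementation avoids underflow."""
    r = mu / sigma
    return mu + sigma * norm.pdf(r) / norm.cdf(r)

u = np.linspace(-3, 3, 7)
print(smoothed_relu(u, sigma=1.0))   # smooth, strictly positive everywhere
print(smoothed_relu(u, sigma=0.2))   # close to max(0, u)
print(np.maximum(0.0, u))            # the ReLU limit
```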
Contrastive divergence training of RTGGMs requires repeated sampling and expectation computation over UTGDs. The efficient sampling schemes from (Chopin, 2012) and closed analytic formulas for moments facilitate tractable learning.
6. Sub-Gaussianity, Concentration, and Variance Proxy
Applications of concentration inequalities or risk bounds frequently demand a sub-Gaussian variance proxy. UTGDs, while sub-Gaussian, are not strictly sub-Gaussian unless truncated symmetrically about the mean (Barreto et al., 13 Mar 2024):
- For a Gaussian with parent parameters $(\mu, \sigma^2)$ truncated to a semi-infinite interval, the optimal sub-Gaussian variance proxy equals the parent variance $\sigma^2$, which always exceeds the true variance of the truncated distribution; proxy and variance coincide only for truncations symmetric about the mean.
- The variance proxy $s^2$ yields sharp concentration inequalities of the form
$$\mathbb{P}\big(|X - \mathbb{E}[X]| \ge t\big) \le 2\exp\!\left(-\frac{t^2}{2 s^2}\right), \qquad t > 0,$$
regardless of the truncation point or mean shift.
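This dichotomy can be checked numerically from the definition of the optimal proxy, $s^2 = \sup_{t \neq 0} 2\log\mathbb{E}[e^{t(X-\mathbb{E}X)}]/t^2$, using the closed-form MGF from Section 1. The sketch below is a brute-force check with illustrative parameters, not the derivation of (Barreto et al., 13 Mar 2024).

```python
import numpy as np
from scipy.stats import norm

def centered_log_mgf(t, mu, sigma, a):
    """log E[exp(t (X - E X))] for X ~ N(mu, sigma^2) | X > a, via the closed-form MGF."""
    alpha = (a - mu) / sigma
    lam = norm.pdf(alpha) / norm.sf(alpha)
    mean = mu + sigma * lam
    log_mgf = mu * t + 0.5 * sigma**2 * t**2 + norm.logsf(alpha - sigma * t) - norm.logsf(alpha)
    return log_mgf - mean * t

mu, sigma, a = 0.0, 1.0, 0.5                      # asymmetric (one-sided) truncation
alpha = (a - mu) / sigma
lam = norm.pdf(alpha) / norm.sf(alpha)
true_var = sigma**2 * (1 + alpha * lam - lam**2)

t = np.linspace(-10, 10, 401)
K = centered_log_mgf(t, mu, sigma, a)

# The parent variance sigma^2 is a valid proxy: K(t) <= sigma^2 t^2 / 2 for all t
print(np.all(K <= 0.5 * sigma**2 * t**2))         # True
# The true variance of the truncated law is not a valid proxy: the bound fails for some t
print(np.all(K <= 0.5 * true_var * t**2))         # False
```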
A plausible implication is that in applications requiring finely tuned risk or uncertainty quantification (e.g., safety-critical control), the difference between the true variance and sub-Gaussian proxy must be accounted for, unless symmetric truncation can be assumed.
7. Practical and Computational Considerations
The computational tractability of UTGDs is foundational to their practical use:
| Aspect | Closed-Form? | Computational Feature |
|---|---|---|
| PDF, CDF | Yes | Single/bi-dimensional integrals |
| Mean, Variance | Yes | Requires evaluation of $\phi$, $\Phi$ |
| Higher Moments/MGF | Yes | Analytic or via numerical integration |
| Sampling | Yes (algorithms) | O(1) per sample (Chopin, 2012) |
| Parameter Estimation | Yes | Convex Moment/ML methods (Kontonis et al., 2019) |
| Sub-Gaussian Proxy | Yes | Explicit closed form (Barreto et al., 13 Mar 2024) |
Efficient numerical routines for the normal CDF and its inverse are essential for both simulation and estimation. Precomputing and storing tables, as in (Chopin, 2012), further accelerates sampling. For higher-dimensional truncations, block-Gibbs or stratified rejection methods are most viable when the problem structure (e.g., Markov random field sparsity, parameter sign patterns) permits.
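As an illustration of the role of these routines, a minimal inverse-CDF sampler that precomputes the normalizing constant is sketched below (illustrative names; for extreme truncation points, survival-function-based formulations are numerically preferable):

```python
import numpy as np
from scipy.stats import norm

def make_utgd_inverse_cdf_sampler(mu, sigma, a, seed=0):
    """Inverse-CDF sampler for X ~ N(mu, sigma^2) | X > a.

    Phi(alpha) is precomputed once; each draw then costs one uniform variate
    and one normal quantile evaluation. For large alpha, survival-function /
    isf-based formulas avoid loss of precision as Phi(alpha) approaches 1."""
    alpha = (a - mu) / sigma
    cdf_a = norm.cdf(alpha)                  # precomputed normalizing constant
    rng = np.random.default_rng(seed)

    def sample(size):
        u = rng.random(size)
        # map U(0,1) onto [Phi(alpha), 1), then invert the standard normal CDF
        return mu + sigma * norm.ppf(cdf_a + u * (1.0 - cdf_a))

    return sample

sampler = make_utgd_inverse_cdf_sampler(mu=1.0, sigma=2.0, a=0.5)
x = sampler(100_000)
print(x.min() >= 0.5, x.mean(), x.var())
```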
Explicit recognition of upper variance bounds (Petrella, 14 Nov 2025) and robust approximation formulae enables direct model calibration and identifiability in practical inverse problems, especially under limited or censored data.
References
- (Petrella, 14 Nov 2025) The Maximal Variance of Unilaterally Truncated Gaussian and Chi Distributions
- (Chopin, 2012) Fast simulation of truncated Gaussian distributions
- (Kontonis et al., 2019) Efficient Truncated Statistics with Unknown Truncation
- (Su et al., 2016) Unsupervised Learning with Truncated Gaussian Graphical Models
- (González et al., 25 Jul 2025) Truncated Gaussian Noise Estimation in State-Space Models
- (Barreto et al., 13 Mar 2024) Optimal sub-Gaussian variance proxy for truncated Gaussian and exponential random variables