Shifted-Truncated Gamma (G-STG) Prior
- The Shifted-Truncated-Gamma (G-STG) prior is a finite-interval modification of the gamma distribution that enables effective shrinkage and variable selection in high-dimensional GLMs.
- It applies truncation and shifting to achieve analytic tractability, closed-form normalization, and robust moment computations essential for precise Bayesian inference.
- The prior supports efficient sampling and Laplace approximations, enhancing model selection consistency and predictive performance across various scientific applications.
The Shifted-Truncated-Gamma (G-STG) prior is a finite-interval modification of the classical gamma distribution, widely used in Bayesian model selection for Generalized Linear Models (GLMs) to parameterize shrinkage or regularization factors. This prior is defined by truncating the support of the gamma distribution to a bounded domain and optionally shifting its lower bound. The G-STG prior is notable for its analytic tractability, local geometric adaptation to model curvature, and satisfaction of essential Bayesian consistency desiderata, rendering it particularly effective for variable selection and model averaging tasks in high-dimensional inference settings.
1. Definition and Mathematical Formulation
The untruncated gamma distribution with shape parameter $a > 0$ and rate parameter $b > 0$ is given by
$$f(x \mid a, b) = \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}, \qquad x > 0.$$
The G-STG prior introduces a lower truncation point $x_l \ge 0$ and an upper truncation point $x_u > x_l$, yielding the density
$$\pi(x) = \frac{1}{C}\,\frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}, \qquad x_l \le x \le x_u,$$
where
$$C = \frac{\Gamma(a, b x_l) - \Gamma(a, b x_u)}{\Gamma(a)}$$
is the normalization constant and $\Gamma(a, z) = \int_z^\infty t^{a-1} e^{-t}\,dt$ is the upper incomplete gamma function. The G-STG prior is a member of the truncated Compound Confluent Hypergeometric (tCCH) family, a general class encompassing standard hyper-g, Beta-prime, and various robust priors (Li et al., 2015).
When used to regularize GLM coefficient shrinkage, the typical G-STG parameterization is over the factor $g$, where $g$ controls the scaling of the Zellner-style prior covariance. In canonical form for $g$,
$$\pi(g) \propto g^{a-1} e^{-bg}, \qquad g_l \le g \le g_u,$$
with $a$ governing the tail behavior and $b$ the overall shrinkage (“unit information” scaling of $b$ with the sample size is common).
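As a concrete sketch of the definition above, the truncated density and its incomplete-gamma normalization can be evaluated in plain Python; all parameter values below are hypothetical, chosen only for illustration, and the incomplete gamma function is computed by its standard power series rather than a library routine:

```python
import math

def reg_lower_inc_gamma(s, x):
    """Regularized lower incomplete gamma P(s, x) via its power series."""
    if x <= 0.0:
        return 0.0
    total, term, k = 0.0, 1.0 / s, 1
    while term > 1e-16 and k < 500:
        total += term
        term *= x / (s + k)
        k += 1
    return total * math.exp(s * math.log(x) - x - math.lgamma(s))

def stg_pdf(x, a, b, x_l, x_u):
    """Density of a Gamma(shape=a, rate=b) truncated to [x_l, x_u]."""
    if not (x_l <= x <= x_u):
        return 0.0
    # closed-form normalization via a difference of incomplete gamma values
    norm = reg_lower_inc_gamma(a, b * x_u) - reg_lower_inc_gamma(a, b * x_l)
    log_kernel = a * math.log(b) + (a - 1) * math.log(x) - b * x - math.lgamma(a)
    return math.exp(log_kernel) / norm

# hypothetical parameters: shape, rate, truncation bounds
a, b, x_l, x_u = 2.0, 1.5, 0.2, 3.0

# sanity check: the density should integrate to ~1 over [x_l, x_u]
n = 4000
h = (x_u - x_l) / n
mass = h * (0.5 * (stg_pdf(x_l, a, b, x_l, x_u) + stg_pdf(x_u, a, b, x_l, x_u))
            + sum(stg_pdf(x_l + i * h, a, b, x_l, x_u) for i in range(1, n)))
```

Trapezoidal integration of the normalized density returns a total mass of essentially one, confirming that the incomplete-gamma normalization matches the kernel.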
2. Induced Prior on Regularization Parameter and Change of Variables
Considering $u = 1/(1+g)$ with $u \in (0, 1]$, the transformation yields $g = (1-u)/u$ and $|dg/du| = u^{-2}$. The induced prior density on $u$ is
$$\pi(u) \propto \left(\frac{1-u}{u}\right)^{a-1} \exp\!\left(-b\,\frac{1-u}{u}\right) u^{-2}, \qquad \frac{1}{1+g_u} \le u \le \frac{1}{1+g_l}.$$
A customary “Gamma-mixing-in-g” representation follows from this derivation, but parameterization in terms of $u$ simplifies analytic marginal likelihood construction (Li et al., 2015).
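The change of variables can be verified numerically: interval probabilities computed under the $g$-parameterization and under the induced $u$-parameterization must agree. The sketch below uses hypothetical values for the shape, rate, and truncation bounds:

```python
import math

a, b = 2.0, 1.5            # hypothetical shape and rate for the prior on g
g_l, g_u = 0.5, 50.0       # hypothetical truncation bounds on g

def kernel_g(g):
    """Unnormalized truncated-gamma kernel in g."""
    return g ** (a - 1) * math.exp(-b * g) if g_l <= g <= g_u else 0.0

def kernel_u(u):
    """Induced kernel on u = 1/(1+g): g = (1-u)/u, Jacobian |dg/du| = u**-2."""
    g = (1.0 - u) / u
    return kernel_g(g) / u ** 2

def trapezoid(f, lo, hi, n=20000):
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

# P(2 <= g <= 10) computed directly in g ...
p_g = trapezoid(kernel_g, 2.0, 10.0) / trapezoid(kernel_g, g_l, g_u)
# ... and via u: g in [2, 10] maps to u in [1/11, 1/3]
u_lo, u_hi = 1.0 / (1.0 + g_u), 1.0 / (1.0 + g_l)
p_u = trapezoid(kernel_u, 1.0 / 11.0, 1.0 / 3.0) / trapezoid(kernel_u, u_lo, u_hi)
```

Both routes give the same interval probability, as the Jacobian factor $u^{-2}$ requires.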
3. Properties and Statistical Implications
Raw Moments
For $r > -a$,
$$\mathbb{E}[X^r] = b^{-r}\,\frac{\gamma(a+r,\, b x_u) - \gamma(a+r,\, b x_l)}{\gamma(a,\, b x_u) - \gamma(a,\, b x_l)},$$
where $\gamma(a, z)$ denotes the lower incomplete gamma function. Specifically, the mean is
$$\mathbb{E}[X] = \frac{1}{b}\,\frac{\gamma(a+1,\, b x_u) - \gamma(a+1,\, b x_l)}{\gamma(a,\, b x_u) - \gamma(a,\, b x_l)},$$
with variance given by the usual formula $\operatorname{Var}[X] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$ (Zaninetti, 2014).
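The moment formulas above can be checked directly. The sketch below (hypothetical parameter values; incomplete gamma again via its power series) computes the mean and variance from the incomplete-gamma ratio and cross-checks the mean by numerical integration of the kernel:

```python
import math

def reg_lower_inc_gamma(s, x):
    """Regularized lower incomplete gamma P(s, x) via its power series."""
    if x <= 0.0:
        return 0.0
    total, term, k = 0.0, 1.0 / s, 1
    while term > 1e-16 and k < 500:
        total += term
        term *= x / (s + k)
        k += 1
    return total * math.exp(s * math.log(x) - x - math.lgamma(s))

# hypothetical shape, rate, and truncation bounds
a, b, x_l, x_u = 2.0, 1.5, 0.2, 3.0

def raw_moment(r):
    """E[X^r] from the incomplete-gamma ratio formula."""
    num = (reg_lower_inc_gamma(a + r, b * x_u)
           - reg_lower_inc_gamma(a + r, b * x_l)) * math.gamma(a + r)
    den = (reg_lower_inc_gamma(a, b * x_u)
           - reg_lower_inc_gamma(a, b * x_l)) * math.gamma(a)
    return num / den / b ** r

mean = raw_moment(1)
var = raw_moment(2) - mean ** 2

# cross-check the mean by direct numerical integration of the gamma kernel
def trapezoid(f, n=20000):
    h = (x_u - x_l) / n
    return h * (0.5 * (f(x_l) + f(x_u)) + sum(f(x_l + i * h) for i in range(1, n)))

kernel = lambda x: x ** (a - 1) * math.exp(-b * x)
mean_numeric = trapezoid(lambda x: x * kernel(x)) / trapezoid(kernel)
```

The analytic and numerical means agree to high precision, and the variance follows from the first two raw moments.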
Normalization
The normalizing constant can be equivalently written using lower incomplete gamma functions:
$$C = \frac{\gamma(a,\, b x_u) - \gamma(a,\, b x_l)}{\Gamma(a)}.$$
This closed-form normalization admits efficient, numerically stable evaluation.
4. Bayesian Embedding and Computational Strategies
Prior Construction
When embedding as a Bayesian prior, the truncation points $x_l, x_u$ reflect known bounds, while $a, b$ are selected to match prior means and variances via the truncated gamma moment equations. Embedded as the prior for a GLM “g-prior” regularization factor $g$, the rate $b$ is scaled to ensure model selection and intrinsic consistency. Default choices with small shape (e.g., $a \le 1$) and $b$ scaled inversely with the sample size $n$ are recommended for their robust tail behavior and analytic tractability (Li et al., 2015).
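One way to implement the moment-matching step is to solve the truncated-mean equation for the rate $b$ numerically. The sketch below fixes the shape $a$, picks a hypothetical target prior mean inside the known bounds, and solves for $b$ by bisection (the truncated mean is decreasing in the rate, so a sign-change bracket suffices; the bracket endpoints are assumptions):

```python
import math

def reg_lower_inc_gamma(s, x):
    """Regularized lower incomplete gamma P(s, x) via its power series."""
    if x <= 0.0:
        return 0.0
    total, term, k = 0.0, 1.0 / s, 1
    while term > 1e-16 and k < 500:
        total += term
        term *= x / (s + k)
        k += 1
    return total * math.exp(s * math.log(x) - x - math.lgamma(s))

x_l, x_u = 0.2, 3.0      # known bounds (hypothetical)
a = 2.0                  # shape fixed in advance (hypothetical)
target_mean = 1.0        # prior mean to be matched (hypothetical)

def truncated_mean(b):
    """E[X] = (a/b) * [P(a+1, b*x_u) - P(a+1, b*x_l)] / [P(a, b*x_u) - P(a, b*x_l)]."""
    num = a * (reg_lower_inc_gamma(a + 1, b * x_u) - reg_lower_inc_gamma(a + 1, b * x_l))
    den = reg_lower_inc_gamma(a, b * x_u) - reg_lower_inc_gamma(a, b * x_l)
    return num / den / b

# bisection on the rate: the truncated mean decreases as b increases
lo, hi = 1e-3, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if truncated_mean(mid) > target_mean:
        lo = mid
    else:
        hi = mid
b = 0.5 * (lo + hi)
```

Matching the variance as well would require solving the analogous second-moment equation jointly in $(a, b)$; the same incomplete-gamma ratios apply.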
Posterior Update
With likelihoods possessing a gamma kernel in the parameter (e.g., Poisson or exponential models, with likelihood $\propto x^{s} e^{-t x}$), the resulting posterior for $x$ maintains the truncated gamma form, with updated parameters $a' = a + s$ and $b' = b + t$. The only non-conjugate aspect is the new normalization constant.
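For instance, with i.i.d. exponential observations $y_i \sim \mathrm{Exp}(x)$ (the dataset below is hypothetical), the likelihood contributes the gamma kernel $x^n e^{-x \sum y_i}$, and the conjugate-style update can be verified directly:

```python
import math

# hypothetical G-STG prior on an exponential rate x, truncated to [x_l, x_u]
a, b, x_l, x_u = 2.0, 1.0, 0.1, 5.0
data = [0.8, 1.3, 0.5, 2.1]       # hypothetical draws y_i ~ Exponential(x)
n, s = len(data), sum(data)

# the likelihood kernel is x**n * exp(-s*x), so the posterior is again a
# truncated gamma on [x_l, x_u] with updated shape and rate:
a_post, b_post = a + n, b + s

def prior_times_likelihood(x):
    return x ** (a - 1) * math.exp(-b * x) * x ** n * math.exp(-s * x)

def posterior_kernel(x):
    return x ** (a_post - 1) * math.exp(-b_post * x)

# the two unnormalized kernels agree up to a constant factor on [x_l, x_u]
ratios = [prior_times_likelihood(x) / posterior_kernel(x) for x in (0.3, 1.0, 2.5)]
```

Only the normalizing constant changes between prior and posterior: it must be recomputed from the incomplete gamma function at the updated parameters $(a', b')$.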
Monte Carlo Sampling
Samples may be generated by rejection sampling from the untruncated gamma, accepting only draws within $[x_l, x_u]$. Inverse-CDF sampling is more efficient: draw $U \sim \mathrm{Uniform}(0, 1)$ and solve $F(x) = F(x_l) + U\,[F(x_u) - F(x_l)]$ for $x$ using root-finding methods, where $F$ is the untruncated gamma CDF. These steps generalize naturally to Gibbs or Metropolis algorithms for hierarchical models (Zaninetti, 2014).
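The inverse-CDF scheme can be sketched as follows; the parameters are hypothetical, the gamma CDF is evaluated by its power series, and the root-finding step uses plain bisection rather than a library quantile function:

```python
import math
import random

def gamma_cdf(s, x):
    """Regularized lower incomplete gamma P(s, x), i.e. the Gamma(s, 1) CDF."""
    if x <= 0.0:
        return 0.0
    total, term, k = 0.0, 1.0 / s, 1
    while term > 1e-16 and k < 500:
        total += term
        term *= x / (s + k)
        k += 1
    return total * math.exp(s * math.log(x) - x - math.lgamma(s))

# hypothetical shape, rate, and truncation bounds
a, b, x_l, x_u = 2.0, 1.5, 0.2, 3.0
F_l, F_u = gamma_cdf(a, b * x_l), gamma_cdf(a, b * x_u)

def stg_sample(rng):
    """Draw U ~ Uniform(0,1), then solve F(x) = F_l + U*(F_u - F_l) for x."""
    target = F_l + rng.random() * (F_u - F_l)
    lo, hi = x_l, x_u
    for _ in range(60):               # bisection: the CDF is increasing in x
        mid = 0.5 * (lo + hi)
        if gamma_cdf(a, b * mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(0)
draws = [stg_sample(rng) for _ in range(500)]
sample_mean = sum(draws) / len(draws)
```

Every draw lies inside the truncation interval by construction, and the sample mean tracks the analytic truncated-gamma mean; inside a Gibbs sweep, the same routine would be called with the current conditional parameters.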
5. Application in Generalized Linear Models and Marginal Likelihoods
Within the GLM context, the G-STG prior on $g$ enables approximate analytic marginal likelihoods via the Laplace approximation. If $Q$ denotes the observed Wald statistic at the MLE $\hat{\beta}$, the Laplace-approximated marginal likelihood is a closed-form function of $Q$ built from incomplete gamma (equivalently, confluent hypergeometric) terms. The Bayes factor comparing models then follows in closed form, enabling the Compound Hypergeometric Information Criterion (CHIC) as a straightforward generalization of well-known Bayesian model selection criteria (Li et al., 2015).
6. Local Geometric Adaptation and Theoretical Justification
The G-STG prior inherits local geometric properties from the information metric of the GLM:
- The prior covariance adapts to the curvature of the log-likelihood at the MLE, de-emphasizing directions with high information.
- Measurement invariance is maintained, as transformations of the design matrix scale the prior covariance equivariantly.
- The prior volume element $\pi(g)\,dg$, integrated over $[g_l, g_u]$, is finite for $a > 0$, guaranteeing propriety.
7. Model Selection Consistency, Recommended Defaults, and Empirical Behavior
Selection consistency is guaranteed by scaling $b$ inversely with the sample size $n$, placing sufficient prior mass on large $g$ and ensuring Bayes factors behave correctly under any fixed alternative or under the null. Intrinsic consistency follows because the prior on $g$ remains diffuse with increasing $n$. For estimation, posterior shrinkage converges so that posterior means approach the MLE, preserving asymptotic unbiasedness in large samples.
Defaults are:
- $a = 1/2$ or $1$ for robust heavy-tailed behavior.
- $b \propto 1/n$ for just-identifiable shrinkage and model selection consistency.
- Together these choices yield closed-form hypergeometric-function Bayes factors, robust inference, low computational overhead, and satisfaction of all Bayarri et al. desiderata (Li et al., 2015).
In high-dimensional applications, $a = 1/2$ is slightly more stable for sparse signals, while $a = 1$ improves prediction with moderate signals. The analyst can tune $a$ and $b$ for application-specific tail and shrinkage regimes.
8. Practical Examples and Performance
In astronomical data modeling (e.g., stellar mass functions), the left–right truncated gamma distribution, the sampling-model analogue of the G-STG, dramatically outperforms lognormal and four-power-law models on $\chi^2$, AIC, and Kolmogorov–Smirnov criteria; Zaninetti (2014) reports such fits for the young clusters NGC 6611 and NGC 2362.
The G-STG prior thus offers closed-form recipes (PDF, CDF, moments, normalization), efficient parameter estimation, and Bayesian conjugate embedding, facilitating robust model selection and prediction in a variety of scientific inference applications.