Bayesian Group Global-Local Shrinkage Prior
- Bayesian group global-local shrinkage prior is a flexible hierarchical model that uses group-specific local scales and a global parameter for adaptive variable selection in high dimensions.
- It employs a polynomial-tailed modification to strongly shrink noise while retaining large signals, ensuring robust group selection and optimal estimation.
- The approach admits an efficient half-thresholding selection rule that, in both theoretical and empirical studies, matches or outperforms traditional spike-and-slab and group LASSO methods.
The Bayesian group global-local shrinkage prior is a flexible class of hierarchical priors developed to address high-dimensional variable selection and estimation problems where covariates or coefficients are structured in groups. Building on the success of continuous global-local shrinkage approaches such as the horseshoe, these priors enable simultaneous adaptation to group-level sparsity and signal strength by assigning each group a local scale parameter that interacts multiplicatively with a global shrinkage parameter. This construction yields strong shrinkage for groups with negligible effects, while preserving estimation accuracy for groups that contain genuine signal, and admits polynomial tails for detection of large effects. A salient variant of this framework uses a “modified global-local” structure that induces polynomially decaying tails for the group coefficients, optimally balancing the need to shrink noise while retaining prominence for large signals. Theoretical and empirical investigations document favorable selection and estimation properties, with performance rivaling or exceeding canonical two-group spike-and-slab approaches in group selection under high-dimensional scaling (Paul et al., 2023).
1. Hierarchical Model Formulation
Let $y \in \mathbb{R}^n$ be the response and $X = [X_1, \dots, X_G]$ the design matrix concatenated from $G$ groups ($X_g$ of size $n \times m_g$). The target coefficients are partitioned as $\beta = (\beta_1^\top, \dots, \beta_G^\top)^\top$ with $\beta_g \in \mathbb{R}^{m_g}$, $\sum_{g=1}^{G} m_g = p$. The Gaussian linear model is
$$y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n).$$
The group global-local shrinkage prior adopts the following form (‘global-local g-prior’):
$$\beta_g \mid \lambda_g^2, \tau^2, \sigma^2 \sim N\big(0,\ \sigma^2 \tau^2 \lambda_g^2 (X_g^\top X_g)^{-1}\big), \qquad \pi(\lambda_g^2) \propto (\lambda_g^2)^{-a-1} L(\lambda_g^2),$$
with $L$ slowly varying. The global scale $\tau$ is either set as a tuning parameter when the group sparsity level is known, or assigned a prior (full or empirical Bayes estimation) such as a truncated half-Cauchy. The variance $\sigma^2$ is given a Jeffreys’ prior ($\pi(\sigma^2) \propto 1/\sigma^2$) in practice, or sometimes fixed for theoretical analysis.
The joint prior density is thus explicitly
$$\pi(\beta, \lambda^2, \tau^2, \sigma^2) \propto \left[\prod_{g=1}^{G} N\big(\beta_g;\ 0,\ \sigma^2 \tau^2 \lambda_g^2 (X_g^\top X_g)^{-1}\big)\,(\lambda_g^2)^{-a-1} L(\lambda_g^2)\right] \pi(\tau^2)\,\frac{1}{\sigma^2}.$$
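As a minimal sketch of this hierarchy (not the authors' implementation), the prior can be simulated forward: half-Cauchy local scales per group, a g-prior-style group covariance, and then Gaussian noise. The sizes, seed, and the fixed values of $\tau$ and $\sigma$ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: G groups, each of size m, n observations.
G, m, n = 10, 3, 100
tau, sigma = 0.1, 1.0            # global scale and noise sd, fixed for the sketch

X = rng.normal(size=(n, G * m))  # design; columns grouped in blocks of m

# One polynomial-tailed local scale per group (half-Cauchy draw).
lam = np.abs(rng.standard_cauchy(G))

# beta_g ~ N(0, sigma^2 tau^2 lambda_g^2 (X_g' X_g)^{-1})  (g-prior form)
beta = np.zeros(G * m)
for g in range(G):
    Xg = X[:, g * m:(g + 1) * m]
    cov = sigma**2 * tau**2 * lam[g]**2 * np.linalg.inv(Xg.T @ Xg)
    cov = (cov + cov.T) / 2       # symmetrize against round-off
    beta[g * m:(g + 1) * m] = rng.multivariate_normal(np.zeros(m), cov)

y = X @ beta + rng.normal(0.0, sigma, size=n)
```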
2. Polynomial-tailed Modification and Tail Properties
The critical feature distinguishing the Bayesian group global-local shrinkage prior is its polynomial-tailed structure on the group coefficient vector. Specifically, the local scales are described by
$$\pi(\lambda_g^2) \propto (\lambda_g^2)^{-a-1} L(\lambda_g^2), \qquad a > 0,$$
with $L$ Karamata slowly varying. The resulting marginal prior on $\beta_g$ then decays, up to slowly varying factors, as
$$\pi(\beta_g) \asymp \|\beta_g\|^{-(2a + m_g)},$$
inducing heavy (polynomial) tails. The exponent $a$ directly modulates the tail decay: small $a$ yields heavier tails, which encourages concentration of mass at zero but does not overly penalize large signals. Special cases include the horseshoe prior and other 'one-group' polynomial-tailed forms as in Tang et al. (2018) (Paul et al., 2023).
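The practical effect of polynomial tails can be checked numerically. The Monte Carlo sketch below (illustrative, not from the paper) compares the tail mass of a half-Cauchy scale mixture of normals, a horseshoe-type polynomial-tailed draw, against a light-tailed Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Horseshoe-type draw: beta = lambda * z with half-Cauchy lambda,
# giving polynomially decaying marginal tails.
beta_poly = np.abs(rng.standard_cauchy(N)) * rng.normal(size=N)

# Light-tailed reference: a standard Gaussian.
beta_gauss = rng.normal(size=N)

# Tail mass P(|beta| > t): the polynomial tails dominate for large t.
for t in (2.0, 5.0, 10.0):
    print(t, np.mean(np.abs(beta_poly) > t), np.mean(np.abs(beta_gauss) > t))
```

Large draws survive under the polynomial-tailed prior while the Gaussian assigns them essentially no mass, which is exactly the "shrink noise, keep signal" behavior described above.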
3. Selection via the Half-Thresholding Rule
A distinguishing feature is the explicit, computationally tractable selection rule. For a block-orthogonal design ($X_g^\top X_{g'} = 0$ for $g \neq g'$), the posterior mean of each group factors as
$$E[\beta_g \mid y] = E[1 - \kappa_g \mid y]\,\hat\beta_g, \qquad \hat\beta_g = (X_g^\top X_g)^{-1} X_g^\top y.$$
Let $\kappa_g = 1/(1 + \tau^2 \lambda_g^2)$ denote the shrinkage factor. The half-thresholding rule declares group $g$ active if
$$E[1 - \kappa_g \mid y] > \tfrac{1}{2}, \quad \text{equivalently} \quad \|E[\beta_g \mid y]\| > \tfrac{1}{2}\|\hat\beta_g\|.$$
This threshold rule is fully specified by the posterior mean, requiring no marginal likelihood computation or combinatorial search, and is adaptive to signal strength and group size (Paul et al., 2023).
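One common reading of the half-thresholding rule keeps a group whenever its posterior mean retains more than half the magnitude of the group least-squares estimate. A minimal sketch follows; the function name and toy numbers are illustrative:

```python
import numpy as np

def half_threshold_select(post_mean_groups, ls_groups):
    """Declare group g active when the posterior mean retains more than
    half the norm of the least-squares estimate, i.e. when the effective
    shrinkage weight is below 1/2 (sketch of the half-thresholding rule)."""
    active = []
    for g, (pm, ls) in enumerate(zip(post_mean_groups, ls_groups)):
        if np.linalg.norm(pm) > 0.5 * np.linalg.norm(ls):
            active.append(g)
    return active

# Toy check: a lightly shrunk group (signal) vs. a heavily shrunk one (noise).
ls = [np.array([3.0, -2.0]), np.array([0.4, 0.1])]
post = [np.array([2.8, -1.9]), np.array([0.02, 0.01])]
print(half_threshold_select(post, ls))   # -> [0]
```

Because the decision uses only the posterior mean, no marginal likelihoods or model enumeration are needed, matching the description above.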
4. Global Scale ($\tau$) Selection Strategies
The choice of the global shrinkage parameter $\tau$ is pivotal for controlling the trade-off between bias and variance:
- Known sparsity: If the proportion of active groups, $s_n/G_n$, is known, a near-optimal choice is $\tau \asymp (s_n/G_n)^{1+\delta}$ for small $\delta > 0$.
- Empirical Bayes: When sparsity is unknown, an empirical Bayes estimator (after van der Pas et al.) is used:
$$\hat\tau = \max\left\{\frac{1}{G_n},\ \frac{1}{c_2 G_n}\sum_{g=1}^{G_n} \mathbb{1}\big\{\|\hat\beta_g\|^2 > c_1 \log G_n\big\}\right\},$$
with $c_1, c_2 > 0$ tuning constants, $\hat\beta_g$ the group least-squares estimate, and the lower truncation at $1/G_n$ preventing collapse of $\hat\tau$ to zero.
- Full Bayes: A half-Cauchy prior on $\tau$, truncated to $(0, 1]$, ensures that the posterior of $\tau$ concentrates in the oracle regime.
Adaptation to unknown sparsity is thus achieved without combinatorial model enumeration (Paul et al., 2023).
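A count-based empirical Bayes estimate of $\tau$ in the spirit of van der Pas et al. can be sketched as follows; the $\sqrt{c_1 \log G}$ threshold and the default constants are illustrative assumptions, not the paper's exact calibration:

```python
import numpy as np

def tau_empirical_bayes(ls_norms, c1=2.0, c2=1.0):
    """Count-based empirical Bayes sketch for the global scale tau:
    the fraction of groups whose least-squares norm clears a universal
    threshold of order sqrt(log G), truncated below at 1/G so the
    estimate never collapses to zero.  c1, c2 are illustrative constants."""
    ls_norms = np.asarray(ls_norms, dtype=float)
    G = len(ls_norms)
    thresh = np.sqrt(c1 * np.log(G))
    count = np.sum(ls_norms > thresh)
    return max(count / (c2 * G), 1.0 / G)

# Toy usage: 95 near-null groups and 5 clear signals out of G = 100.
norms = [0.0] * 95 + [10.0] * 5
print(tau_empirical_bayes(norms))   # -> 0.05
```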
5. Theoretical Guarantees
Let $s_n$ denote the number of active groups, $S_0$ the true active set, and $G_n$ the total number of groups, with $G_n \to \infty$ as $n \to \infty$.
- Variable selection consistency: Under standard regularity conditions (group designs with bounded eigenvalues, non-vanishing signals, bounded group sizes) and suitable choices of $\tau$, the half-thresholding rule is selection consistent:
$$P\big(\hat{S}_n = S_0\big) \to 1 \quad \text{as } n \to \infty,$$
where $\hat{S}_n$ is the set of groups declared active.
- Oracle estimation rates: For any unit vector $a$ with support in $S_0$, and under further eigenvalue and signal bounds, the posterior-mean estimator achieves asymptotic normality at the minimax-optimal rate: $\sqrt{n}\,a^\top\big(E[\beta \mid y] - \beta_0\big)$ converges in distribution to a centered normal limit.
- These properties extend to the empirical Bayes and full Bayes strategies, requiring only mild technical modifications for $\hat\tau$ or alternative empirical selection (Paul et al., 2023).
6. Empirical Performance and Method Comparisons
Extensive simulations were conducted across nine regimes (varying sample size, signal strength, group sizes, and orthogonality of the design). Principal comparators include:
- Modified Group Horseshoe (MGH) and Group Horseshoe (GH)
- Empirical Bayes MGH-EB1/EB2, Full Bayes MGH-FB
- Two-group spike-&-slab (GSD-SSS, BGL-SS), Group LASSO
Performance metrics: Misclassification Probability (MP), False Positive Rate (FPR), True Positive Rate (TPR).
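These three metrics are straightforward to compute from boolean selection indicators; a small helper (names are illustrative):

```python
import numpy as np

def selection_metrics(selected, truth):
    """Misclassification probability (MP), false positive rate (FPR) and
    true positive rate (TPR) for group selection, from boolean indicators."""
    selected = np.asarray(selected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    mp = np.mean(selected != truth)
    fpr = np.mean(selected[~truth]) if (~truth).any() else 0.0
    tpr = np.mean(selected[truth]) if truth.any() else 0.0
    return mp, fpr, tpr

# Toy usage: one missed signal, one false alarm out of four groups.
print(selection_metrics([1, 0, 1, 0], [1, 1, 0, 0]))   # -> (0.5, 0.5, 0.5)
```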
Findings:
- MGH (and GH) priors yield the lowest MP and FPR and the highest TPR, especially under weak or moderate signal regimes and smaller sample sizes.
- Empirical Bayes and Full Bayes variants match nearly the oracle-tuned half-thresholding rule.
- Two-group priors (GSD-SSS, BGL-SS) require stronger signals or larger samples to achieve similar performance.
- Group LASSO tends to overselect (high FPR) except under strong signals or large samples.
This demonstrates that one-group, polynomial-tailed global-local priors with the half-thresholding rule match or outperform classical two-group spike-and-slab or penalized likelihood group selection methods while simultaneously offering substantial computational and inferential simplicity (Paul et al., 2023).
7. Broader Context and Extensions
The group global-local paradigm extends naturally to multilevel and network-structured problems (e.g., multivariate responses (Kundu et al., 2019), multilevel models with joint control via Dirichlet or Beta-P distributions (Aguilar et al., 2022), gene network estimation (Leday et al., 2015), and network-based classification (Guha et al., 2020)). Each variant tailors the local scales to correspond to natural groupings and adapts the thresholding or selection scheme appropriately. Notably, the polynomial-tailed forms enable robust signal recovery in ultra-high-dimensional or weak-signal settings and facilitate practical model selection via continuous shrinkage without the need for discrete model search. In summary, the Bayesian group global-local shrinkage prior furnishes a unified, theoretically rigorous, and empirically validated approach to sparse estimation and group selection across a broad array of high-dimensional settings.