Gaussian Sieve Priors
- Gaussian sieve priors are hierarchical Bayesian priors that express infinite-dimensional functions using truncated orthonormal basis expansions.
- They enable adaptive nonparametric inference by selecting a variable truncation level, achieving near-minimax global L2 contraction rates.
- However, they exhibit suboptimal performance for pointwise and semi-parametric loss functions due to insufficient regularization of intermediate-frequency components.
Gaussian sieve priors are hierarchical Bayesian priors designed for adaptive nonparametric inference, especially in settings where the underlying signal or function admits a sparse or truncated orthonormal expansion. In such models, the key feature is to express the infinite-dimensional parameter (such as a function or spectral density) in a suitable basis, and then encode prior information via a variable truncation level and, conditionally, independent Gaussian priors on the expansion coefficients. The formulation enables dimension adaptation and allows contraction rates that (up to logarithmic factors) closely track minimax optimality for certain global loss functions. Gaussian sieve priors have been rigorously analyzed for models including the Gaussian white noise model and semi-parametric Gaussian time series, revealing both their strengths in global adaptation and their limitations under pointwise or semi-parametric loss functions (Arbel et al., 2012, Kruijer et al., 2012).
1. Construction of Gaussian Sieve Priors
In the Gaussian white noise model
$$dX^{(n)}(t) = f(t)\,dt + \tfrac{1}{\sqrt{n}}\,dW(t), \qquad t \in [0,1],$$
where $f$ is the unknown function and $W$ is standard Brownian motion, the function is expressed in an orthonormal basis $(\varphi_i)_{i\ge 1}$ as $f = \sum_{i\ge 1} \theta_i \varphi_i$. The observations in the basis are
$$X_i = \theta_i + \tfrac{1}{\sqrt{n}}\,\varepsilon_i, \qquad \varepsilon_i \overset{\text{iid}}{\sim} \mathcal{N}(0,1), \quad i = 1, 2, \dots$$
The Gaussian sieve prior places a hierarchical structure on the coefficients $\theta = (\theta_i)_{i\ge 1}$:
$$\pi(\theta) = \sum_{k \ge 1} \pi(k)\, \pi(\theta \mid k),$$
where
- $\pi(k)$ is a prior over the truncation level $k \in \mathbb{N}$,
- Conditionally on $k$, $\theta_i \sim \mathcal{N}(0, \tau_i^2)$ independently for $i \le k$, and $\theta_i = 0$ for $i > k$.
Canonical choices are a Poisson distribution with parameter $\lambda$ for the truncation level $k$ and, for $i \le k$, independent Gaussian priors $\theta_i \sim \mathcal{N}(0, \tau_i^2)$ (Arbel et al., 2012). The prior is thus a random mixture over finite-dimensional Gaussians, which induces dimension reduction and penalizes complexity via the decay of $\pi(k)$ (typically $e^{-c_1 k \log k} \le \pi(k) \le e^{-c_2 k \log k}$ for constants $c_1 \ge c_2 > 0$, as satisfied by the Poisson prior).
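As a concrete illustration, the following sketch draws one random function from such a sieve prior in a cosine basis on $[0,1]$; the Poisson mean, the scale form $\tau_i^2 = i^{-2\alpha}$, and the basis are illustrative assumptions rather than the papers' exact choices.

```python
import numpy as np

def sample_sieve_prior(rng, lam=5.0, alpha=1.0, n_grid=512):
    """Draw one random function from a Gaussian sieve prior.

    Hierarchy (illustrative hyperparameters, not the cited papers' exact choices):
      k ~ Poisson(lam), truncated to k >= 1
      theta_i | k ~ N(0, i**(-2*alpha)) independently for i <= k, theta_i = 0 for i > k
    The function is represented in the cosine basis sqrt(2)*cos(pi*i*t) on [0, 1].
    """
    k = max(1, rng.poisson(lam))                    # random truncation level
    i = np.arange(1, k + 1)
    tau = i ** (-alpha)                             # prior scales tau_i (assumed form)
    theta = rng.normal(0.0, tau)                    # active Gaussian coefficients
    t = np.linspace(0.0, 1.0, n_grid)
    basis = np.sqrt(2.0) * np.cos(np.pi * np.outer(i, t))
    return t, theta @ basis, k

rng = np.random.default_rng(0)
t, f, k = sample_sieve_prior(rng)
print(f"drew a function with k = {k} active coefficients")
```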
In semi-parametric time series models, e.g., the FEXP model for Gaussian long-memory series with spectral density
$$f(\lambda) = |1 - e^{i\lambda}|^{-2d} \exp\Bigl(\sum_{j=0}^{k} \theta_j \cos(j\lambda)\Bigr),$$
a sieve prior is formulated by independently assigning the following (a sampling sketch is given after the list):
- to the long-memory parameter $d$, a (fixed) density supported in $(-1/2, 1/2)$,
- to the truncation level $k$, either a deterministic rate in $n$ or a random Poisson/geometric prior,
- to the coefficients $(\theta_0, \dots, \theta_k)$, a distribution supported on a Sobolev ball of smoothness $\beta$ (Kruijer et al., 2012).
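A minimal sketch of the FEXP spectral density together with one draw from a sieve-type prior over $(d, k, \theta)$; the uniform prior on $d$, the geometric parameter, and the coefficient scale are illustrative stand-ins, not the specific choices of Kruijer et al. (2012).

```python
import numpy as np

def fexp_spectral_density(lam, d, theta):
    """FEXP spectral density: |1 - e^{i*lam}|**(-2d) * exp(sum_j theta_j * cos(j*lam))."""
    j = np.arange(len(theta))
    short_memory = np.exp(np.cos(np.outer(lam, j)) @ theta)     # smooth short-memory part
    long_memory = np.abs(1.0 - np.exp(1j * lam)) ** (-2.0 * d)  # pole at lam = 0 when d > 0
    return long_memory * short_memory

# One draw from an illustrative sieve prior over (d, k, theta).
rng = np.random.default_rng(1)
d = rng.uniform(0.0, 0.45)                  # long-memory parameter (illustrative prior)
k = rng.geometric(0.2)                      # random truncation level (geometric prior)
theta = rng.normal(0.0, 0.3, size=k + 1)    # coefficients theta_0, ..., theta_k
lam = np.linspace(1e-3, np.pi, 200)         # frequencies, avoiding the pole at 0
f = fexp_spectral_density(lam, d, theta)
```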
2. Posterior Contraction Rates: L2 and Global Loss
Under mild regularity assumptions, Gaussian sieve priors yield adaptive minimax-optimal posterior contraction rates (up to log-factors) for global $L_2$ (or $\ell_2$) loss over appropriately defined Sobolev-type parameter spaces.
Specifically, for the Sobolev ball
$$\Theta_\beta(L) = \Bigl\{\theta : \sum_{i \ge 1} i^{2\beta} \theta_i^2 \le L\Bigr\},$$
the minimax $\ell_2$-estimation rate is $n^{-\beta/(2\beta+1)}$. The Gaussian sieve prior achieves the rate $\varepsilon_n \asymp n^{-\beta/(2\beta+1)}$ up to a logarithmic factor, in the sense that, for any $\theta_0 \in \Theta_\beta(L)$ and $M$ sufficiently large,
$$E_{\theta_0}\,\Pi\bigl(\|\theta - \theta_0\|_2 \ge M \varepsilon_n \mid X^{(n)}\bigr) \to 0$$
as $n \to \infty$ [(Arbel et al., 2012), Theorem 3.4, Proposition 4.1, Section 5.1]. The Bayes risk associated with the posterior mean also achieves this near-minimax rate.
The key mechanism underlying these results is a balance of approximation error (controlled by the truncation level $k$ and the prior scales $\tau_i$) and stochastic error (arising from the noise level and the prior's effective sample size). The proof uses prior-mass lower bounds over Kullback–Leibler neighborhoods, metric entropy estimates, and non-asymptotic testing inequalities.
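This bias–variance balance can be made concrete in the conjugate sequence model: given $k$, each active coefficient has an explicit Gaussian posterior with a shrinkage weight, and the posterior over $k$ weighs fit against complexity through the marginal likelihood and $\pi(k)$. A minimal sketch, assuming a Poisson prior on $k$ and illustrative scales $\tau_i^2 = i^{-2\alpha}$ (hyperparameters `lam` and `alpha` are assumptions, not values from the cited papers):

```python
import numpy as np
from scipy.stats import norm, poisson

def sieve_posterior(x, n, k_max=50, lam=5.0, alpha=1.0):
    """Posterior over the truncation level k and posterior-mean coefficients
    in the sequence model x_i = theta_i + eps_i / sqrt(n), eps_i ~ N(0, 1).

    Given k, each active coefficient has the conjugate posterior mean w_i * x_i
    with shrinkage w_i = tau_i^2 / (tau_i^2 + 1/n); inactive coefficients are 0.
    """
    x = np.asarray(x, dtype=float)
    i = np.arange(1, len(x) + 1)
    tau2 = i ** (-2.0 * alpha)
    sigma2 = 1.0 / n
    k_max = min(k_max, len(x))

    # Per-coordinate log marginal likelihoods under the "active" and "zero" states.
    log_active = norm.logpdf(x, scale=np.sqrt(tau2 + sigma2))
    log_zero = norm.logpdf(x, scale=np.sqrt(sigma2))

    ks = np.arange(1, k_max + 1)
    log_post_k = np.array([
        poisson.logpmf(k, lam) + log_active[:k].sum() + log_zero[k:].sum()
        for k in ks
    ])
    post_k = np.exp(log_post_k - log_post_k.max())
    post_k /= post_k.sum()                      # posterior pi(k | x)

    # Posterior mean of theta: average the conjugate means over k.
    shrink = tau2 / (tau2 + sigma2)
    theta_mean = np.zeros_like(x)
    for k, w in zip(ks, post_k):
        theta_mean[:k] += w * shrink[:k] * x[:k]
    return ks, post_k, theta_mean
```

The posterior over $k$ concentrates where the gain in fit from activating further coefficients no longer compensates the complexity penalty imposed by $\pi(k)$, which is the adaptive bias–variance trade-off described above.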
3. Adaptation and Suboptimality for Other Losses
While Gaussian sieve priors provide sharp global adaptation, their behavior under other loss functions can be markedly different. For the pointwise (local) risk
$$\ell_{t_0}(f, f_0) = \bigl(f(t_0) - f_0(t_0)\bigr)^2,$$
where $t_0 \in [0,1]$ is a fixed point, the minimax rate over $\Theta_\beta(L)$ is polynomially faster than what the sieve prior attains: under the Gaussian sieve prior, the pointwise risk decays at a rate slower than the minimax rate by a polynomial factor in $n$ [(Arbel et al., 2012), Proposition 5.3]. The dominant error arises from intermediate-frequency coefficients that are insufficiently regularized by the sieve prior, causing excess local variance.
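The source of the excess local variance can be seen by comparing how the two losses aggregate coefficient errors (a schematic observation, not a statement quoted from the cited papers):
$$\hat f(t_0) - f_0(t_0) = \sum_{i \ge 1} \bigl(\hat\theta_i - \theta_{0,i}\bigr)\,\varphi_i(t_0), \qquad \|\hat f - f_0\|_2^2 = \sum_{i \ge 1} \bigl(\hat\theta_i - \theta_{0,i}\bigr)^2.$$
For a bounded basis, the local error aggregates coefficient errors linearly, whereas the global $L_2$ loss aggregates their squares; coefficient errors at intermediate frequencies that are negligible after squaring and summing can therefore still dominate the error at a fixed point $t_0$.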
A similar phenomenon appears in semi-parametric estimation of long-memory parameters in time series. With random truncation priors (Poisson or geometric) on the expansion length $k$, the posterior for the long-memory parameter $d$ contracts at a rate that is polynomially slower than the minimax rate for $d$. Only when $k$ is tuned deterministically to the minimax-optimal dimension, as a function of $n$ and the smoothness $\beta$ (rather than assigned a prior), does the sieve prior attain the nearly optimal rate [(Kruijer et al., 2012), Theorems 3.2–3.4].
4. Key Technical Conditions and Proof Structure
Contraction theorems for sieve priors rest on several analytic conditions:
- KL-approximation (A1): Existence of low-dimensional truncations that approximate the true model in Kullback–Leibler divergence.
- Reverse KL–$\ell_2$ control (A2): Uniform control of the KL divergence in terms of the $\ell_2$-distance around the truncation.
- Metric entropy and covering (A3): Ability to cover sieved parameter sets in $\ell_2$-distance by $\varepsilon$-balls, which facilitates the construction of exponentially powerful tests.
- Testing (A4): Construction of tests with exponentially decaying type I and II errors for hypotheses separated in $\ell_2$-distance.
- Prior tails and scales (A5): Appropriate decay of $\pi(k)$, sufficient prior mass for the scales $\tau_i$, and tail regularity for the conditional prior densities.
The proof proceeds via a testing–prior-mass approach. The numerator of the posterior probability for "bad" sets (where the contraction fails) is controlled by a union bound over tests, while the denominator is lower-bounded by the prior mass of suitable KL neighborhoods. Contributions from excessively large and small truncation levels $k$ are handled via tail bounds on $\pi(k)$ (Arbel et al., 2012).
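Schematically, conditions (A1)–(A5) feed into a Ghosal–Ghosh–van der Vaart-type contraction theorem of the following form (stated loosely; the exact neighborhoods and constants are as in the cited papers): for sieve sets $\Theta_n$ and a target rate $\varepsilon_n$,
$$\Pi\bigl(B_{KL}(\theta_0, \varepsilon_n)\bigr) \ge e^{-c\, n \varepsilon_n^2}, \qquad \log N(\varepsilon_n, \Theta_n, \|\cdot\|_2) \le n \varepsilon_n^2, \qquad \Pi(\Theta_n^{c}) \le e^{-(c+4)\, n \varepsilon_n^2},$$
together imply $E_{\theta_0}\,\Pi\bigl(\|\theta - \theta_0\|_2 \ge M \varepsilon_n \mid X^{(n)}\bigr) \to 0$, where $B_{KL}(\theta_0, \varepsilon_n)$ denotes a Kullback–Leibler neighborhood of $\theta_0$ of radius of order $n\varepsilon_n^2$.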
5. Implications for Adaptive Bayesian Estimation
Gaussian sieve priors illustrate both the strengths and limitations of hierarchical Bayesian adaptivity. For global loss functions such as integrated squared error, the prior achieves adaptive minimax rates over a large class of smoothness spaces (e.g., Sobolev balls). The adaptation occurs via the random selection (or deterministic choice) of the truncation level $k$, which balances bias and variance automatically.
However, for more localized or semi-parametric functionals (e.g., pointwise function estimation, long-memory parameter estimation), full Bayesian adaptation via a sieve prior leads to a trade-off in contraction rate. The failure to attain optimal local rates is due to a fundamental mismatch between the global regularization induced by the sieve structure and the localized risk structure of the problem (Arbel et al., 2012, Kruijer et al., 2012).
This behavior underscores the necessity of careful prior design or tuning for objectives that go beyond global estimation: for instance, fixing the sieve truncation to match the minimax-optimal dimension achieves nearly optimal rates for long-memory parameters in time series, whereas data-driven or fully random truncation does not.
6. Comparison with Frequentist and Other Bayesian Approaches
Analysis reveals that the convergence properties of Gaussian sieve priors often parallel those of frequentist sieve estimators, particularly in nonparametric and semi-parametric models. For example, periodogram-based estimators for long-memory time series achieve the minimax rate for $d$, which the contraction rate of the sieve prior with deterministic truncation $k$ matches up to log-factors (Kruijer et al., 2012).
In contrast to fully Bayesian methods that randomize $k$ with heavy-tailed priors (facilitating automatic adaptation over the entire parameter space), deterministic or empirically tuned sieves can deliver sharper convergence rates for specific functionals. This reflects an inherent trade-off: full Bayesian adaptation excels for function estimation in global metrics, but sacrifices efficiency for certain semi-parametric objectives.
7. Summary Table: Sieve Prior Contraction Rates
| Problem/Class | Sieve Prior Type | Achieved Rate (up to logs) | Minimax/Optimal Rate |
|---|---|---|---|
| Global $L_2$ (Sobolev $\beta$, white noise) | Poisson $k$, Gaussian coefficients | $n^{-\beta/(2\beta+1)}$ | $n^{-\beta/(2\beta+1)}$ |
| Pointwise (Sobolev $\beta$, white noise) | Same | Slower by a polynomial factor in $n$ | Minimax pointwise rate over the Sobolev ball |
| Long-memory $d$, FEXP (semi-parametric) | Deterministic $k$ | Minimax rate up to log-factors | Minimax rate for $d$ |
| Long-memory $d$, FEXP | Poisson/geometric $k$ | Slower by a polynomial factor in $n$ | Minimax rate for $d$ |
The contraction properties confirm that Gaussian sieve priors are robust tools for adaptive nonparametric Bayes inference in high-dimensional and infinite-dimensional settings, but their performance must be evaluated in light of the specific estimation criterion of interest.
References: (Arbel et al., 2012, Kruijer et al., 2012).