Penalty parameter selection and asymmetry corrections to Laplace approximations in Bayesian P-splines models (2210.01668v1)

Published 4 Oct 2022 in stat.ME

Abstract: Laplacian-P-splines (LPS) associate the P-splines smoother and the Laplace approximation in a unifying framework for fast and flexible inference under the Bayesian paradigm. Gaussian Markov random field priors imposed on penalized latent variables and the Bernstein-von Mises theorem typically ensure a razor-sharp accuracy of the Laplace approximation to the posterior distribution of these variables. This accuracy can be seriously compromised for some unpenalized parameters, especially when the information synthesized by the prior and the likelihood is sparse. We propose a refined version of the LPS methodology by splitting the latent space into two subsets. The first set involves latent variables for which the joint posterior distribution is approached from a non-Gaussian perspective with an approximation scheme that is particularly well tailored to capture asymmetric patterns, while the posterior distribution for parameters in the complementary latent set undergoes a traditional treatment with Laplace approximations. As such, the dichotomization of the latent space provides the necessary structure for a separate treatment of model parameters, yielding improved estimation accuracy as compared to a setting where posterior quantities are uniformly handled with Laplace. In addition, the proposed enriched version of LPS remains entirely sampling-free, so that it operates at a computing speed that is out of reach for any existing Markov chain Monte Carlo approach. The methodology is illustrated on the additive proportional odds model with an application to ordinal survey data.
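
To make the core building block concrete, the sketch below shows a generic Laplace approximation for a penalized B-spline (P-spline) regression: find the posterior mode of the spline coefficients, then approximate the posterior by a Gaussian with covariance given by the inverse negative Hessian at the mode. This is an illustrative sketch only, not the authors' LPS code; the Poisson likelihood, second-order difference penalty, fixed penalty value lam, and all other settings are assumptions chosen for demonstration, and the asymmetry corrections proposed in the paper are not implemented here.

    # Minimal sketch of a Laplace approximation for a P-spline Poisson fit.
    # Assumed model and settings; not the LPS method of the paper.
    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Simulated data: smooth log-rate observed with Poisson noise.
    n = 200
    x = np.sort(rng.uniform(0, 1, n))
    y = rng.poisson(np.exp(1.0 + np.sin(2 * np.pi * x)))

    # Cubic B-spline basis with K basis functions (assumed design choice).
    K, deg = 20, 3
    knots = np.concatenate(([0.0] * deg, np.linspace(0, 1, K - deg + 1), [1.0] * deg))
    B = BSpline.design_matrix(x, knots, deg).toarray()   # n x K basis matrix
    D = np.diff(np.eye(K), n=2, axis=0)                  # second-order differences
    P = D.T @ D                                          # roughness penalty matrix
    lam = 10.0                                           # fixed penalty parameter (illustration only)

    def neg_log_post(theta):
        # Negative log-posterior: Poisson log-likelihood plus Gaussian
        # Markov random field prior on the penalized coefficients theta.
        eta = B @ theta
        return -(y @ eta - np.exp(eta).sum()) + 0.5 * lam * theta @ P @ theta

    # Laplace approximation: posterior mode plus Gaussian curvature at the mode.
    res = minimize(neg_log_post, np.zeros(K), method="BFGS")
    theta_hat = res.x
    W = np.diag(np.exp(B @ theta_hat))                   # Poisson weights at the mode
    H = B.T @ W @ B + lam * P                            # negative Hessian at the mode
    Sigma = np.linalg.inv(H)                             # covariance of the Gaussian approximation

    print("Posterior mode (first 5 coefficients):", theta_hat[:5])
    print("Approximate posterior s.d. (first 5):", np.sqrt(np.diag(Sigma))[:5])

In the paper's refinement, only part of the latent vector would receive this Gaussian treatment; the complementary block (for example, poorly informed unpenalized parameters) would instead get a skewness-aware, non-Gaussian approximation, and the penalty parameter itself would be selected rather than fixed as above.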
