
Bayesian Compressed Mixed-Effects Models (2507.16961v1)

Published 22 Jul 2025 in stat.ME and stat.CO

Abstract: Penalized likelihood and quasi-likelihood methods dominate inference in high-dimensional linear mixed-effects models. Sampling-based Bayesian inference is less explored due to the computational bottlenecks introduced by the random effects covariance matrix. To address this gap, we propose the compressed mixed-effects (CME) model, which defines a quasi-likelihood using low-dimensional covariance parameters obtained via random projections of the random effects covariance. This dimension reduction, combined with a global-local shrinkage prior on the fixed effects, yields an efficient collapsed Gibbs sampler for prediction and fixed effects selection. Theoretically, when the compression dimension grows slowly relative to the number of fixed effects and observations, the Bayes risk for prediction is asymptotically negligible, ensuring accurate prediction using the CME model. Empirically, the CME model outperforms existing approaches in terms of predictive accuracy, interval coverage, and fixed-effects selection across varied simulation settings and a real-world dataset.
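To make the abstract's compression idea concrete, below is a minimal sketch of how a random projection can reduce the random-effects covariance to a low-dimensional parameterisation that feeds a Gaussian quasi-likelihood. This is an illustrative reading of the abstract only: the simulated data, the isotropic covariance for the compressed effects, and all names (Phi, Z_c, tau2, m) are assumptions, not the authors' notation, and the global-local shrinkage prior and collapsed Gibbs sampler from the paper are omitted.

```python
# Sketch of the compression step suggested by the abstract: replace the
# q-dimensional random-effects covariance with an m-dimensional one obtained
# via a random projection, then evaluate a Gaussian quasi-log-likelihood.
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear mixed-effects data: y = X beta + Z b + eps
n, p, q = 200, 50, 30                        # observations, fixed effects, random effects
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
beta_true = np.zeros(p)
beta_true[:5] = 2.0                          # sparse fixed effects
b_true = rng.normal(scale=0.5, size=q)
y = X @ beta_true + Z @ b_true + rng.normal(size=n)

# Compression: project the q random effects down to m << q
m = 5                                        # compression dimension (assumed small)
Phi = rng.normal(size=(m, q)) / np.sqrt(m)   # random projection matrix
Z_c = Z @ Phi.T                              # compressed random-effects design (n x m)

def quasi_loglik(beta, tau2, sigma2):
    """Gaussian quasi-log-likelihood of the compressed model.

    The marginal covariance uses only the m-dimensional compressed design,
    with an isotropic covariance tau2 * I_m for the compressed effects
    (a simplifying assumption made for this sketch).
    """
    V = sigma2 * np.eye(n) + tau2 * (Z_c @ Z_c.T)
    resid = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (logdet + resid @ np.linalg.solve(V, resid))

# Sanity check: evaluate the quasi-likelihood at the true fixed effects.
print(quasi_loglik(beta_true, tau2=0.25, sigma2=1.0))
```

Because V involves only the m compressed columns rather than the full q-dimensional covariance, its cost grows with m, which is the computational saving the abstract attributes to the compression; a full treatment would place a global-local shrinkage prior on beta and sample via a collapsed Gibbs sampler as the paper describes.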
