
Greater Than the Sum of its Parts: Computationally Flexible Bayesian Hierarchical Modeling (2010.12568v2)

Published 23 Oct 2020 in stat.ME and stat.CO

Abstract: We propose a multistage method for making inference at all levels of a Bayesian hierarchical model (BHM), using natural data partitions to increase efficiency by allowing computations to take place in parallel, with software that is most appropriate for each data partition. The full hierarchical model is then approximated by the product of independent normal distributions for the data component of the model. In the second stage, the Bayesian maximum a posteriori (MAP) estimator is found by maximizing the approximated posterior density with respect to the parameters. If the parameters of the model can be represented as normally distributed random effects, then the second-stage optimization is equivalent to fitting a multivariate normal linear mixed model. This method can be extended to account for common fixed parameters shared between data partitions, as well as parameters that are distinct between partitions. In the case of distinct parameter estimation, we consider a third stage that re-estimates the distinct parameters for each data partition based on the results of the second stage. This allows more information from the entire data set to properly inform the posterior distributions of the distinct parameters. The method is demonstrated with two ecological data sets and models, a random effects GLM and an Integrated Population Model (IPM). The multistage results were compared to estimates from models fit in single stages to the entire data set. Both examples demonstrate that multistage point and posterior standard deviation estimates closely approximate those obtained from fitting the models with all data simultaneously, and can therefore be considered for fitting hierarchical Bayesian models when it is computationally prohibitive to do so in one step.
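The two-stage idea in the abstract can be sketched on a toy problem. This is a hypothetical simulation, not one of the paper's ecological examples: each partition is fit independently and summarized by a normal approximation, and the second stage combines those approximations with a normal prior, for which the MAP estimate reduces to a precision-weighted mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one shared mean observed across three data partitions.
true_mu = 2.0
partitions = [true_mu + rng.normal(0.0, 1.0, size=n) for n in (50, 80, 120)]

# Stage 1: fit each partition independently (here a simple normal MLE) and
# summarize its likelihood by a normal approximation N(m_j, s_j^2).
stage1 = []
for y in partitions:
    m = y.mean()
    s2 = y.var(ddof=1) / len(y)  # squared standard error of the mean
    stage1.append((m, s2))

# Stage 2: combine the independent normal approximations with a normal
# prior mu ~ N(mu0, tau2). Maximizing the approximate posterior gives the
# familiar precision-weighted combination as the MAP estimate.
mu0, tau2 = 0.0, 100.0
prec = 1.0 / tau2 + sum(1.0 / s2 for _, s2 in stage1)
map_mu = (mu0 / tau2 + sum(m / s2 for m, s2 in stage1)) / prec
post_sd = prec ** -0.5

print(f"MAP estimate: {map_mu:.3f}, posterior sd: {post_sd:.3f}")
```

Because each stage-1 fit touches only its own partition, the three fits could run in parallel and in different software, which is the efficiency argument the abstract makes; the paper's actual second stage handles vectors of random effects via a multivariate normal linear mixed model rather than this scalar case.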
