Detecting Conflicts in Evidence Synthesis Models Using Score Discrepancies
Abstract: Evidence synthesis models combine multiple data sources to estimate latent quantities of interest, enabling reliable inference on parameters that are difficult to measure directly. However, shared parameters across data sources can induce conflicts both among the data sources and with the assumed model structure. Detecting and quantifying such conflicts remains a challenge in model criticism. Here we propose a general framework for conflict detection in evidence synthesis models based on score discrepancies, extending prior-data conflict diagnostics to more general conflict checks in the latent space of hierarchical models. Simulation studies in an exchangeable model demonstrate that the proposed approach effectively detects between-data inconsistencies. Application to an influenza severity model illustrates its use as a complement to traditional deviance-based diagnostics in complex real-world hierarchical settings. The proposed framework thus provides a flexible and broadly applicable tool for consistency assessment in Bayesian evidence synthesis.
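To make the idea of a score-discrepancy conflict check concrete, the following is a minimal toy sketch, not the paper's method: two data sources inform a common mean, the score (derivative of each source's log-likelihood with respect to the shared parameter) is evaluated at the pooled estimate, and the discrepancy between the two scores is compared to its simulated reference distribution under a no-conflict null. All names and modelling choices here (unit variance, pooled MLE as the plug-in value) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "data sources" informing a common mean mu (known sd = 1).
# Source B is deliberately shifted to create a conflict.
y_a = rng.normal(0.0, 1.0, size=50)
y_b = rng.normal(2.0, 1.0, size=50)

def score_discrepancy(y_a, y_b):
    # Score d(log-lik)/d(mu) of each source at the pooled MLE of mu;
    # with unit variance the score of a source is n * (ybar - mu).
    mu_hat = np.concatenate([y_a, y_b]).mean()
    s_a = len(y_a) * (y_a.mean() - mu_hat)
    s_b = len(y_b) * (y_b.mean() - mu_hat)
    return s_a - s_b  # large |value| suggests between-source conflict

obs = score_discrepancy(y_a, y_b)

# Reference distribution under the no-conflict null: both sources
# share the same mu (here fixed at the pooled estimate).
mu0 = np.concatenate([y_a, y_b]).mean()
ref = np.array([
    score_discrepancy(rng.normal(mu0, 1.0, 50), rng.normal(mu0, 1.0, 50))
    for _ in range(2000)
])
p_conflict = np.mean(np.abs(ref) >= np.abs(obs))  # two-sided conflict p-value
print(f"discrepancy = {obs:.2f}, conflict p-value = {p_conflict:.3f}")
```

With the sources shifted by two standard deviations, the observed discrepancy falls far in the tail of the null reference distribution and the conflict p-value is essentially zero; with both sources drawn from the same mean, it would be roughly uniform. The paper's framework generalizes this kind of check to latent parameters in hierarchical evidence synthesis models.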