
CoLT: The conditional localization test for assessing the accuracy of neural posterior estimates (2507.17030v1)

Published 22 Jul 2025 in stat.ML and cs.LG

Abstract: We consider the problem of validating whether a neural posterior estimate (NPE) $q(\theta \mid x)$ is an accurate approximation to the true, unknown posterior $p(\theta \mid x)$. Existing methods for evaluating the quality of an NPE are largely derived from classifier-based tests or divergence measures, but these suffer from several practical drawbacks. As an alternative, we introduce the \emph{Conditional Localization Test} (CoLT), a principled method designed to detect discrepancies between $p(\theta \mid x)$ and $q(\theta \mid x)$ across the full range of conditioning inputs. Rather than relying on exhaustive comparisons or density estimation at every $x$, CoLT learns a localization function that adaptively selects points $\theta_l(x)$ where the neural posterior $q$ deviates most strongly from the true posterior $p$ for that $x$. This approach is particularly advantageous in typical simulation-based inference settings, where only a single draw $\theta \sim p(\theta \mid x)$ from the true posterior is observed for each conditioning input, but where the neural posterior $q(\theta \mid x)$ can be sampled an arbitrary number of times. Our theoretical results establish necessary and sufficient conditions for assessing distributional equality across all $x$, offering both rigorous guarantees and practical scalability. Empirically, we demonstrate that CoLT not only performs better than existing methods at comparing $p$ and $q$, but also pinpoints regions of significant divergence, providing actionable insights for model refinement. These properties position CoLT as a state-of-the-art solution for validating neural posterior estimates.
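The abstract's key setting — one true draw $\theta \sim p(\theta \mid x)$ per input $x$, but unlimited draws from $q(\theta \mid x)$ — can be illustrated with a toy sketch. The code below is not the paper's actual CoLT procedure: the true and approximate posteriors, the fixed probe function standing in for the learned localizer $\theta_l(x)$, and the ball-indicator statistic are all illustrative assumptions. It merely shows how a probe point per $x$ lets one compare a single-draw empirical frequency under $p$ against a many-sample estimate under $q$.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_posterior_draw(x):
    # toy p(theta | x): Normal(x, 1) -- one draw per x, as in SBI
    return rng.normal(x, 1.0)

def npe_samples(x, n):
    # toy miscalibrated q(theta | x): Normal(x, 0.5), too narrow
    return rng.normal(x, 0.5, size=n)

def localization_point(x):
    # hypothetical fixed localizer probing the tail, theta = x + 1.5;
    # CoLT instead *learns* theta_l(x) to maximize the discrepancy
    return x + 1.5

def localization_statistic(xs, radius=0.5, n_q=200):
    """At each probe theta_l(x), compare the indicator
    1{|theta - theta_l(x)| < radius} averaged over the single
    true draw per x versus many draws from q."""
    p_hits, q_hits = [], []
    for x in xs:
        t = localization_point(x)
        p_hits.append(abs(true_posterior_draw(x) - t) < radius)
        q_hits.append(np.mean(np.abs(npe_samples(x, n_q) - t) < radius))
    # nonzero in expectation exactly when p and q put different
    # mass in the probed balls
    return np.mean(p_hits) - np.mean(q_hits)

xs = rng.normal(0.0, 2.0, size=2000)
stat = localization_statistic(xs)
print(f"discrepancy at probe points: {stat:.3f}")
```

Because the toy $q$ is too narrow, the probed tail region receives less mass under $q$ than under $p$, so the statistic comes out positive; a well-calibrated $q$ would drive it toward zero.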
