Statistical Inference for Generative Model Comparison (2501.18897v2)
Abstract: Generative models have recently achieved remarkable empirical performance in a variety of applications; however, their evaluation still lacks uncertainty quantification. In this paper, we propose a method for comparing two generative models with statistical confidence, based on an unbiased estimator of their relative performance gap. Theoretically, our estimator achieves parametric convergence rates and admits asymptotic normality, which enables valid inference. Empirically, on simulated datasets, our approach effectively controls the type I error without compromising power. On real image and language datasets, we further demonstrate our method's ability to compare generative models with statistical guarantees.
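To make the flavor of such a comparison concrete, here is a minimal sketch in Python. It is not the paper's estimator: it assumes we already have a per-sample quality score for each generative model on a shared evaluation set (a hypothetical setup), and it compares the models through the paired mean score gap, using a CLT-based normal approximation to form a confidence interval and a two-sided p-value. The paper's unbiased gap estimator and its asymptotic theory are more involved than this.

```python
# Hedged illustrative sketch: asymptotic-normality-based inference for the
# mean score gap between two generative models, given hypothetical paired
# per-sample scores. This is NOT the paper's method, only a simple analogue.
import numpy as np
from scipy import stats


def compare_models(scores_a: np.ndarray, scores_b: np.ndarray, alpha: float = 0.05):
    """Two-sided asymptotic z-test for the gap E[score_A - score_B]."""
    diffs = scores_a - scores_b                  # paired per-sample gaps
    n = diffs.shape[0]
    gap_hat = diffs.mean()                       # point estimate of the gap
    se = diffs.std(ddof=1) / np.sqrt(n)          # standard error (parametric sqrt(n) rate)
    z = gap_hat / se
    p_value = 2 * stats.norm.sf(abs(z))          # p-value from the normal approximation
    half_width = stats.norm.ppf(1 - alpha / 2) * se
    ci = (gap_hat - half_width, gap_hat + half_width)
    return gap_hat, ci, p_value


# Synthetic example (fabricated scores purely for illustration):
rng = np.random.default_rng(0)
scores_a = rng.normal(loc=0.55, scale=0.1, size=2000)
scores_b = rng.normal(loc=0.53, scale=0.1, size=2000)
gap, ci, p = compare_models(scores_a, scores_b)
print(f"estimated gap={gap:.4f}, 95% CI=({ci[0]:.4f}, {ci[1]:.4f}), p={p:.4g}")
```

Rejecting when the p-value falls below the nominal level (or, equivalently, when the confidence interval excludes zero) is what allows type I error control of the kind the abstract describes; the paper's contribution lies in constructing an unbiased gap estimator for which this asymptotic-normal reasoning is actually valid.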