
Unsupervised Detection of Distribution Shift in Inverse Problems using Diffusion Models (2505.11482v3)

Published 16 May 2025 in cs.CV

Abstract: Diffusion models are widely used as priors in imaging inverse problems. However, their performance often degrades under distribution shifts between the training and test-time images. Existing methods for identifying and quantifying distribution shifts typically require access to clean test images, which are almost never available while solving inverse problems (at test time). We propose a fully unsupervised metric for estimating distribution shifts using only indirect (corrupted) measurements and score functions from diffusion models trained on different datasets. We theoretically show that this metric estimates the KL divergence between the training and test image distributions. Empirically, we show that our score-based metric, using only corrupted measurements, closely approximates the KL divergence computed from clean images. Motivated by this result, we show that aligning the out-of-distribution score with the in-distribution score -- using only corrupted measurements -- reduces the KL divergence and leads to improved reconstruction quality across multiple inverse problems.

Summary


The paper "Unsupervised Detection of Distribution Shift in Inverse Problems using Diffusion Models" makes a significant contribution to imaging inverse problems by addressing the distribution shifts that degrade the generalization of diffusion-model priors. Such priors assume that training and test images come from similar distributions, an assumption frequently violated in real-world settings, particularly in healthcare and other domains with complex data. The authors propose a novel, fully unsupervised metric for estimating distribution shift from corrupted measurements alone, the scenario prevalent in inverse problems, where clean test images are inaccessible.

Core Contributions and Results

The work introduces a score-based metric that estimates the Kullback–Leibler (KL) divergence between training and test datasets using diffusion models. This metric is derived theoretically to approximate the KL divergence directly from corrupted images rather than requiring access to clean test images, thus making the solution practical for real-world inverse problems like MRI reconstruction and image inpainting. Empirically, it is demonstrated that the proposed metric closely aligns with the KL divergence calculated using clean images across various datasets and corruption levels.
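The general idea can be illustrated in a toy one-dimensional setting where the score functions are available in closed form. In this sketch (the Gaussians, parameter values, and noise level below are illustrative stand-ins, not taken from the paper), two Gaussian priors play the roles of the in-distribution and out-of-distribution models, and the shift is measured as the mean squared difference of their measurement-level scores, evaluated only on noisy measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setting with analytic scores: two Gaussians stand in for the
# in-distribution (InD) and out-of-distribution (OOD) image priors.
mu_ind, sig_ind = 0.0, 1.0      # "in-distribution" prior (hypothetical)
mu_ood, sig_ood = 0.8, 1.3      # "out-of-distribution" prior (hypothetical)

def score(x, mu, var):
    """Score function d/dx log N(x; mu, var)."""
    return -(x - mu) / var

# Corrupted measurements: noisy observations of images from the OOD prior.
noise_sig = 0.5
n = 50_000
y = rng.normal(mu_ood, sig_ood, size=n) + rng.normal(0.0, noise_sig, size=n)

# Under additive Gaussian corruption, the marginal of y under each prior is
# Gaussian with inflated variance, so both measurement-level scores are known.
var_ind_y = sig_ind**2 + noise_sig**2
var_ood_y = sig_ood**2 + noise_sig**2

# Score-discrepancy metric: mean squared score difference on the corrupted
# measurements alone -- no clean test images are ever needed.
d = np.mean((score(y, mu_ind, var_ind_y) - score(y, mu_ood, var_ood_y))**2)
print(f"score discrepancy on measurements: {d:.4f}")
```

The metric is zero exactly when the two measurement-level scores agree and grows with the shift between the priors, which is the qualitative behavior the paper establishes (there, with a theoretical link to the KL divergence between the image distributions).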

A noteworthy methodological advance is a closed-form metric connecting the KL divergence to the discrepancy between in-distribution (InD) and out-of-distribution (OOD) scores, evaluated on corrupted measurements. This framework explains why adapting score functions on partial measurements improves generalization: it directly reduces the distribution shift.
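A minimal 1-D analogue of such score adaptation (a generic score-matching sketch, not the paper's actual algorithm) fits a mis-specified Gaussian prior to corrupted measurements via the Hyvarinen score-matching objective; minimizing it pulls the model's score toward the measurement distribution and shrinks the mismatch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Corrupted measurements drawn from a shifted test distribution
# (mean shifted to 0.8; variance assumed known for simplicity).
y = rng.normal(0.8, 1.0, size=20_000)
v = 1.0                      # measurement-level variance (assumed known)

def score_matching_obj(m):
    """Hyvarinen score-matching objective for the Gaussian model score
    s(y) = -(y - m) / v, whose derivative is the constant -1/v:
    J(m) = E[ 0.5 * s(y)^2 + s'(y) ]."""
    return np.mean(0.5 * ((y - m) / v) ** 2) - 1.0 / v

m = 0.0                      # model prior mean, initially mis-specified
j_before = score_matching_obj(m)

# Gradient descent on m; for this model dJ/dm = (m - mean(y)) / v^2.
lr = 0.5
for _ in range(200):
    m -= lr * (m - y.mean()) / v**2

j_after = score_matching_obj(m)
print(f"adapted mean m = {m:.3f}, objective {j_before:.3f} -> {j_after:.3f}")
```

The adapted mean converges to the sample mean of the measurements and the objective decreases, illustrating in miniature how aligning the model's score with the measurement distribution reduces the shift.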

Practical and Theoretical Implications

Practical Implications: The proposed metric provides a reliable diagnostic tool for practitioners to identify distribution shifts in deployed models without necessitating clean ground-truth data, which is often unavailable or expensive to obtain. Additionally, the adaptation method informed by the metric offers a feasible strategy to enhance reconstruction performance in inverse problems, making it valuable for image restoration tasks in medical imaging where accuracy and robustness are critical.

Theoretical Implications: The results extend the understanding of diffusion models applied within inverse problems by offering a new perspective on distribution shift quantification. The linkage of distribution shift estimation with score function mismatches at the measurement level furthers the theoretical basis for unsupervised model adaptation.

Directions for Future Research

The promising results invite several research directions: extending the metric to measurement models beyond those studied, including anisotropic noise distributions and non-linear measurement operators, and integrating it with other learning paradigms, such as domain adaptation or continual learning, to improve model resilience to distribution shifts over time.

Overall, this paper offers an impactful method for handling distribution shifts in inverse problems, with strong theoretical backing and empirical validation, laying the groundwork for more robust and adaptable imaging solutions in sensitive application areas such as healthcare.