
Invertible generative models for inverse problems: mitigating representation error and dataset bias (1905.11672v4)

Published 28 May 2019 in cs.CV

Abstract: Trained generative models have shown remarkable performance as priors for inverse problems in imaging -- for example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors at inverse problems such as denoising, compressive sensing, and inpainting. Given a trained generative model, we study the empirical risk formulation of the desired inverse problem under a regularization that promotes high likelihood images, either directly by penalization or algorithmically by initialization. For compressive sensing, invertible priors can yield higher accuracy than sparsity priors across almost all undersampling ratios, and due to their lack of representation error, invertible priors can yield better reconstructions than GAN priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images. We additionally compare performance for compressive sensing to unlearned methods, such as the deep decoder, and we establish theoretical bounds on expected recovery error in the case of a linear invertible model.

Citations (143)

Summary

  • The paper demonstrates that invertible generative models using the Glow architecture achieve precise latent variable inference and zero representation error, enabling robust image reconstruction.
  • It shows superior performance in image denoising and compressive sensing with higher PSNR compared to traditional methods, even for out-of-distribution images.
  • The study derives theoretical bounds for recovery error linked to the smallest singular values, offering actionable insights for advancing imaging inverse problem solutions.

Insights on Invertible Generative Models for Imaging Inverse Problems

The paper "Invertible Generative Models for Inverse Problems: Mitigating Representation Error and Dataset Bias" proposes invertible neural networks as priors for solving imaging inverse problems such as denoising, compressive sensing, and inpainting. Because traditional generative models like GANs suffer from representation error and bias inherited from their training datasets, the paper argues that invertible models can overcome these shortcomings by design, owing to their zero representation error and expressive latent space.
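The core recipe is to search the latent space of a trained generator G for a reconstruction that fits the measurements while a penalty promotes high-likelihood (small-norm) latents. A toy sketch under stated assumptions: here G is a random *linear* invertible map W (standing in for a trained flow), the measurement matrix A, dimensions, and the weight `gamma` are all invented for illustration, and the ridge problem is solved in closed form rather than by gradient descent as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16          # signal dimension; number of measurements (m < n: compressive)
gamma = 1e-3           # weight on the Gaussian-latent likelihood penalty

# Toy invertible "generator" G(z) = W z: a well-conditioned random linear map.
W = np.linalg.qr(rng.standard_normal((n, n)))[0] + 0.1 * np.eye(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix

x_true = W @ (0.1 * rng.standard_normal(n))     # a signal drawn from the prior
y = A @ x_true                                  # compressive measurements

# Empirical risk with a likelihood-promoting penalty:
#   z_hat = argmin_z ||A G(z) - y||^2 + gamma ||z||^2
# For linear G this is ridge regression with a closed-form solution.
M = A @ W
z_hat = np.linalg.solve(M.T @ M + gamma * np.eye(n), M.T @ y)
x_hat = W @ z_hat      # any image is reachable: zero representation error

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With a real flow such as Glow, `z_hat` is instead found by gradient descent on the same objective, optionally replacing the explicit penalty with a low-likelihood initialization.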

The paper emphasizes the unique architecture of invertible neural networks, particularly the Glow architecture, which permits exact latent-variable inference and efficient image synthesis. The ability of invertible networks to handle out-of-distribution images without explicit low-dimensional constraints highlights their utility across diverse and challenging datasets.
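Exact latent inference comes from building the network out of exactly invertible transformations. A minimal sketch of one such building block, an affine coupling layer (a simplified version of the couplings used in flows like Glow; the parameterization of the scale and shift via `w` and `b` here is an arbitrary illustration, not the paper's architecture):

```python
import numpy as np

def forward(x, w, b):
    """Affine coupling: transform the second half of x conditioned on the first."""
    x1, x2 = np.split(x, 2)
    s = np.tanh(w @ x1)                 # log-scale, computed from the untouched half
    t = b @ x1                          # shift, computed from the untouched half
    return np.concatenate([x1, x2 * np.exp(s) + t])

def inverse(y, w, b):
    """Exact inverse: recompute s, t from the untouched half and undo the affine map."""
    y1, y2 = np.split(y, 2)
    s = np.tanh(w @ y1)
    t = b @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

rng = np.random.default_rng(1)
d = 8
w = rng.standard_normal((d // 2, d // 2))
b = rng.standard_normal((d // 2, d // 2))
x = rng.standard_normal(d)

# Round trip is exact: every image has a latent representation (zero representation error).
assert np.allclose(inverse(forward(x, w, b), w, b), x)
```

Because half the input passes through unchanged, the conditioning values needed for inversion are always available, so z = G⁻¹(x) exists for every x regardless of how the layer's parameters were trained.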

Key contributions and results from the paper are:

  • Image Denoising: Using CelebA images, the authors demonstrated that Glow-based invertible priors yield sharper image reconstructions with higher Peak Signal-to-Noise Ratios (PSNR) than traditional methods such as BM3D and trained DCGANs, which suffer from dataset bias.
  • Compressive Sensing: The paper shows that invertible priors achieve higher PSNRs than GANs and unlearned methods like the Deep Decoder across a broad range of undersampling ratios. Notably, Glow models degrade more gracefully on out-of-distribution datasets, illustrating their robustness to distribution shifts.
  • Theoretical Bounds: The paper derives bounds on expected recovery error for a linear invertible model, showing that the expected error is governed by the model's smallest singular values. These insights help explain the error behavior observed in practice.
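The role of the smallest singular value in the linear analysis can be illustrated numerically. A sketch under stated assumptions: a random square matrix W stands in for the linear invertible model, and we verify the classical inequality ||W⁻¹e|| ≤ ||e|| / σ_min(W), which is the mechanism by which small singular values amplify measurement perturbations into latent (and hence recovery) error; this is an illustration of the underlying linear algebra, not a reproduction of the paper's bound.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
W = rng.standard_normal((n, n))            # stand-in for a linear invertible model
sigma = np.linalg.svd(W, compute_uv=False) # singular values, descending order
sigma_min = sigma[-1]

e = rng.standard_normal(n)                 # a perturbation of the measurements
z_err = np.linalg.solve(W, e)              # induced error in the inferred latent

# Operator-norm bound: ||W^{-1}|| = 1 / sigma_min, so ||W^{-1} e|| <= ||e|| / sigma_min.
print("sigma_min:", sigma_min)
print("latent error norm:", np.linalg.norm(z_err),
      "bound:", np.linalg.norm(e) / sigma_min)
```

The smaller σ_min is, the looser the control on recovery error, which matches the paper's finding that the expected error is tied to the model's smallest singular values.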

The implications of these findings are substantial. Invertible architectures offer a way to mitigate the biases that conventional generative models inherit from limited training samples. This is crucial for applications in medical imaging and other scientific fields, where novel or anomalous image features are common.

Moving forward, the exploration into hybrid models that integrate the strengths of invertible networks and traditional low-dimensional generative methods might yield even more powerful tools. Further research could focus on optimizing these architectures to better balance computational efficiency with their enhanced representational capabilities.

Overall, the paper asserts the transformative potential of invertible generative models in advancing the field of imaging inverse problems, particularly in situations where maintaining image integrity across varying input distributions is critical.
