Uncertainty Visualization via Low-Dimensional Posterior Projections (2312.07804v2)
Abstract: In ill-posed inverse problems, it is commonly desirable to obtain insight into the full spectrum of plausible solutions, rather than extracting only a single reconstruction. Information about the plausible solutions and their likelihoods is encoded in the posterior distribution. However, for high-dimensional data, this distribution is challenging to visualize. In this work, we introduce a new approach for estimating and visualizing posteriors by employing energy-based models (EBMs) over low-dimensional subspaces. Specifically, we train a conditional EBM that receives an input measurement and a set of directions that span some low-dimensional subspace of solutions, and outputs the probability density function of the posterior within that space. We demonstrate the effectiveness of our method across a diverse range of datasets and image restoration problems, showcasing its strength in uncertainty quantification and visualization. As we show, our method outperforms a baseline that projects samples from a diffusion-based posterior sampler, while being orders of magnitude faster. Furthermore, it is more accurate than a baseline that assumes a Gaussian posterior.
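The core idea in the abstract — evaluating a posterior density restricted to a low-dimensional subspace of solutions — can be illustrated with a minimal numerical sketch. This is not the paper's method: the learned conditional EBM is replaced here by a hypothetical quadratic energy, and the normalization is done numerically on a coefficient grid; the subspace directions, anchor point, and grid range are all illustrative assumptions.

```python
import numpy as np

# Toy sketch: a posterior over candidates x = x_hat + V @ alpha, where V spans
# a k-dimensional subspace of the d-dimensional solution space. In the paper,
# a trained conditional EBM supplies the energy; here a placeholder quadratic
# energy (distance of the candidate to the measurement) stands in for it.

def energy(x, y):
    # Placeholder (assumed) energy; a learned network would go here.
    return 0.5 * np.sum((x - y) ** 2)

def subspace_pdf(y, x_hat, V, grid):
    """Normalized density over subspace coefficients alpha, computed by
    evaluating the energy on a discrete grid and normalizing numerically."""
    logp = np.array([-energy(x_hat + V @ a, y) for a in grid])
    logp -= logp.max()          # subtract max for numerical stability
    p = np.exp(logp)
    return p / p.sum()          # discrete normalization over the grid

d, k = 16, 2                    # ambient dimension and subspace dimension
rng = np.random.default_rng(0)
y = rng.normal(size=d)          # toy "measurement"
x_hat = np.zeros(d)             # anchor reconstruction (assumed)
V, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal directions
axis = np.linspace(-3.0, 3.0, 41)
grid = np.array([[a, b] for a in axis for b in axis])
p = subspace_pdf(y, x_hat, V, grid)            # 2D posterior slice to plot
```

The resulting array `p` can be reshaped to `(41, 41)` and rendered as a heat map, which is the kind of low-dimensional visualization the abstract describes; with a trained EBM, the energy evaluation would replace the quadratic placeholder.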