
Uncertainty Visualization via Low-Dimensional Posterior Projections (2312.07804v2)

Published 12 Dec 2023 in cs.CV

Abstract: In ill-posed inverse problems, it is commonly desirable to obtain insight into the full spectrum of plausible solutions, rather than extracting only a single reconstruction. Information about the plausible solutions and their likelihoods is encoded in the posterior distribution. However, for high-dimensional data, this distribution is challenging to visualize. In this work, we introduce a new approach for estimating and visualizing posteriors by employing energy-based models (EBMs) over low-dimensional subspaces. Specifically, we train a conditional EBM that receives an input measurement and a set of directions that span some low-dimensional subspace of solutions, and outputs the probability density function of the posterior within that space. We demonstrate the effectiveness of our method across a diverse range of datasets and image restoration problems, showcasing its strength in uncertainty quantification and visualization. As we show, our method outperforms a baseline that projects samples from a diffusion-based posterior sampler, while being orders of magnitude faster. Furthermore, it is more accurate than a baseline that assumes a Gaussian posterior.
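To make the idea concrete, here is a minimal, self-contained sketch of the evaluation step the abstract describes: given a measurement and a chosen low-dimensional subspace, an energy function is evaluated over coordinates in that subspace and exponentiated-and-normalized into a posterior density. The paper's model is a learned neural network conditioned on the measurement and the subspace directions; the quadratic `energy` below is a hypothetical stand-in used only so the example runs without a trained model.

```python
import math

# Hypothetical stand-in for the learned conditional EBM E_theta(y, V, t).
# In the paper this is a neural network; here a toy quadratic energy,
# conditioned on a scalar "measurement" y, keeps the sketch self-contained.
def energy(y, t1, t2):
    return 0.5 * ((t1 - y) ** 2 + 2.0 * (t2 + 0.5 * y) ** 2)

def posterior_grid(y, lo=-3.0, hi=3.0, n=61):
    """Evaluate exp(-E) on an n x n grid of coordinates (t1, t2) in the
    2D subspace spanned by two chosen directions, then normalize so the
    grid sums to 1 -- a discrete approximation of the posterior pdf
    restricted to that subspace."""
    step = (hi - lo) / (n - 1)
    grid = [[math.exp(-energy(y, lo + i * step, lo + j * step))
             for j in range(n)] for i in range(n)]
    z = sum(sum(row) for row in grid)   # estimate of the partition function
    return [[v / z for v in row] for row in grid]

pdf = posterior_grid(y=1.0)
total = sum(sum(row) for row in pdf)    # normalizes to ~1.0
```

Because the density is only ever evaluated on a low-dimensional grid, the normalizing constant can be approximated by direct summation, which is exactly what makes low-dimensional projections of an otherwise intractable high-dimensional posterior visualizable.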
