Diffusion Posterior Sampling is Computationally Intractable (2402.12727v1)
Abstract: Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$. In posterior sampling, one is also given a measurement model $p(y \mid x)$ and a measurement $y$, and would like to sample from $p(x \mid y)$. Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction, so a number of recent works have given algorithms to heuristically approximate it; but none are known to converge to the correct distribution in polynomial time. In this paper we show that posterior sampling is \emph{computationally intractable}: under the most basic assumption in cryptography -- that one-way functions exist -- there are instances for which \emph{every} algorithm takes superpolynomial time, even though \emph{unconditional} sampling is provably fast. We also show that the exponential-time rejection sampling algorithm is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
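
To make the setup concrete, below is a minimal sketch of the exponential-time rejection-sampling baseline the abstract alludes to, not the paper's construction. It assumes a Gaussian measurement model $y = Ax + \mathcal{N}(0, \sigma^2 I)$ and access to an unconditional sampler for $p(x)$ (e.g. a trained diffusion model); the names `sample_prior`, `A`, and `sigma` are illustrative placeholders.

```python
import numpy as np

def rejection_posterior_sample(sample_prior, y, A, sigma, max_tries=10**6):
    """Brute-force rejection sampling from p(x | y).

    Assumes y = A x + Gaussian noise of variance sigma^2, so
    p(y | x) is proportional to exp(-||y - A x||^2 / (2 sigma^2)),
    which is at most 1 after dividing by its maximum over x.
    Draw x ~ p(x) unconditionally and accept with that probability.
    """
    for _ in range(max_tries):
        x = sample_prior()                      # x ~ p(x), unconditional draw
        residual = y - A @ x
        accept_prob = np.exp(-residual @ residual / (2 * sigma**2))
        if np.random.rand() < accept_prob:      # accept w.p. p(y|x) / max_x p(y|x)
            return x
    raise RuntimeError("no sample accepted within budget")
```

Each accepted draw is an exact sample from $p(x \mid y)$, but the acceptance probability can be exponentially small, which is exactly the exponential running time the paper shows is essentially unavoidable under standard cryptographic assumptions.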