
What's in a Prior? Learned Proximal Networks for Inverse Problems (2310.14344v2)

Published 22 Oct 2023 in cs.CV and cs.LG

Abstract: Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what has been learned by these networks about their training data. Herein we provide a framework to develop learned proximal networks (LPN), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPN provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only result in state-of-the-art performance, but provide a window into the resulting priors learned from data.
