Learned Regularization for Inverse Problems: Insights from a Spectral Model (2312.09845v2)

Published 15 Dec 2023 in math.NA, cs.LG, and cs.NA

Abstract: In this chapter we provide a theoretically founded investigation of state-of-the-art learning approaches for inverse problems from the point of view of spectral reconstruction operators. We give an extended definition of regularization methods and their convergence in terms of the underlying data distributions, which paves the way for future theoretical studies. Based on a simple spectral learning model previously introduced for supervised learning, we investigate some key properties of different learning paradigms for inverse problems, which can be formulated independently of specific architectures. In particular we examine their regularization properties, bias, and critical dependence on the training data distribution. Moreover, our framework allows us to highlight and compare the specific behavior of the different paradigms in the infinite-dimensional limit.
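
To make the spectral viewpoint concrete, below is a minimal numerical sketch (not the paper's model) of a learned spectral regularizer for a linear inverse problem y = A x + noise: the forward operator is diagonalized by its SVD, and per-component filter coefficients are fit to empirical training pairs so as to minimize the mean squared reconstruction error. The forward operator A, the synthetic training distribution, and the closed-form filter f_i = sigma_i E[z_i c_i] / E[c_i^2] are illustrative assumptions chosen for this sketch, not details taken from the paper.

# Minimal sketch (illustrative assumptions, not the authors' model):
# learn spectral filter coefficients for a linear inverse problem y = A x + noise.
import numpy as np

rng = np.random.default_rng(0)
n, n_train, noise_std = 50, 500, 0.05

# Ill-conditioned forward operator with decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 1.0 / (1.0 + np.arange(n)) ** 2
A = U @ np.diag(sigma) @ V.T

# Synthetic training pairs (x_i, y_i) from an assumed anisotropic data distribution.
X = rng.standard_normal((n_train, n)) * (1.0 / (1.0 + np.arange(n)))
Y = X @ A.T + noise_std * rng.standard_normal((n_train, n))

# Work in spectral (SVD) coordinates: c = U^T y, z = V^T x.
C = Y @ U          # spectral data coefficients (one row per sample)
Z = X @ V          # spectral signal coefficients

# Per-component filter minimizing the empirical MSE of the reconstruction
#   x_hat = sum_i f_i * (c_i / sigma_i) * v_i,
# which has the closed form f_i = sigma_i * E[z_i c_i] / E[c_i^2].
f = sigma * np.mean(Z * C, axis=0) / np.mean(C ** 2, axis=0)

# Reconstruct a test example and compare with the unfiltered pseudoinverse.
x_true = rng.standard_normal(n) * (1.0 / (1.0 + np.arange(n)))
y = A @ x_true + noise_std * rng.standard_normal(n)
c = U.T @ y
x_learned = V @ (f * c / sigma)
x_pinv = V @ (c / sigma)
print("learned-filter error:", np.linalg.norm(x_learned - x_true))
print("pseudoinverse error: ", np.linalg.norm(x_pinv - x_true))

For a Gaussian prior with per-component variance gamma_i^2 and noise variance delta^2, this kind of data-optimal filter reduces to f_i = sigma_i^2 gamma_i^2 / (sigma_i^2 gamma_i^2 + delta^2), i.e. a Tikhonov-type spectral filter whose strength is dictated by the training data distribution; this is the sort of correspondence a spectral model makes explicit.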

