Learning Weakly Convex Regularizers for Convergent Image-Reconstruction Algorithms (2308.10542v2)

Published 21 Aug 2023 in eess.IV, cs.CV, and cs.LG

Abstract: We propose to learn non-convex regularizers with a prescribed upper bound on their weak-convexity modulus. Such regularizers give rise to variational denoisers that minimize a convex energy. They rely on few parameters (fewer than 15,000) and offer a signal-processing interpretation, as they mimic handcrafted sparsity-promoting regularizers. Through numerical experiments, we show that such denoisers outperform convex-regularization methods as well as the popular BM3D denoiser. Additionally, the learned regularizer can be deployed to solve inverse problems with iterative schemes that provably converge. For both CT and MRI reconstruction, the regularizer generalizes well and offers an excellent tradeoff between performance, number of parameters, guarantees, and interpretability when compared to other data-driven approaches.
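To make the convexity trade-off in the abstract concrete, here is a minimal sketch of the convex-nonconvex denoising setup it describes; the notation (rho for the weak-convexity modulus, y for the noisy image) is assumed for illustration rather than taken from the paper. A regularizer R is rho-weakly convex if

    x -> R(x) + (rho/2) ||x||^2   is convex.

The associated variational denoiser then solves

    min_x E(x),   with   E(x) = (1/2) ||x - y||^2 + R(x).

Because the quadratic data term is 1-strongly convex, E remains convex whenever rho <= 1. Prescribing an upper bound of 1 on the weak-convexity modulus therefore lets R itself be non-convex while keeping the overall denoising energy convex, which is what permits iterative minimization schemes with convergence guarantees.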

