Deep, convergent, unrolled half-quadratic splitting for image deconvolution (2402.12872v2)

Published 20 Feb 2024 in eess.IV and eess.SP

Abstract: In recent years, algorithm unrolling has emerged as a powerful technique for designing interpretable neural networks based on iterative algorithms. Imaging inverse problems have particularly benefited from unrolling-based deep network design since many traditional model-based approaches rely on iterative optimization. Despite exciting progress, typical unrolling approaches heuristically design layer-specific convolution weights to improve performance. Crucially, convergence properties of the underlying iterative algorithm are lost once layer-specific parameters are learned from training data. We propose an unrolling technique that breaks the trade-off between retaining algorithm properties and enhancing performance. We focus on image deblurring and unroll the widely applied Half-Quadratic Splitting (HQS) algorithm. We develop a new parametrization scheme which enforces layer-specific parameters to asymptotically approach certain fixed points. Through extensive experimental studies, we verify that our approach achieves competitive performance with state-of-the-art unrolled layer-specific learning and significantly improves over the traditional HQS algorithm. We further establish convergence of the proposed unrolled network as the number of layers approaches infinity, and characterize its convergence rate. Our experimental verification involves simulations that validate the analytical results as well as comparison with state-of-the-art non-blind deblurring techniques on benchmark datasets. The merits of the proposed convergent unrolled network are established over competing alternatives, especially in the regime of limited training.
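For orientation, the sketch below shows the classical HQS iteration that such unrolled networks are built from, in plain NumPy. It assumes a non-blind setting with a known blur kernel and a simple L1 prior; the function names (`hqs_deconvolution`, `pad_kernel`) and the geometric penalty schedule are illustrative assumptions, and the paper's learned layer-specific operators and convergence-enforcing parametrization are not reproduced here.

```python
import numpy as np

def pad_kernel(k, shape):
    """Zero-pad a small blur kernel to the image size and circularly shift
    it so that its centre sits at the origin (a basic psf-to-otf step)."""
    out = np.zeros(shape)
    kh, kw = k.shape
    out[:kh, :kw] = k
    return np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def hqs_deconvolution(y, kernel, lam=0.01, beta0=1.0, rho=2.0, iters=10):
    """Classical HQS for 0.5*||k*x - y||^2 + lam*||x||_1 with the split z = x.
    Each loop iteration corresponds to one layer of an unrolled network."""
    K = np.fft.fft2(pad_kernel(kernel, y.shape))  # blur operator in Fourier domain
    Y = np.fft.fft2(y)
    x, beta = y.copy(), beta0
    for _ in range(iters):
        # z-update: proximal map of the prior; soft-thresholding for an L1 prior.
        # (Unrolled networks typically replace this step with a learned operator.)
        z = np.sign(x) * np.maximum(np.abs(x) - lam / beta, 0.0)
        # x-update: quadratic data-fidelity subproblem, solved in closed form via the FFT.
        X = (np.conj(K) * Y + beta * np.fft.fft2(z)) / (np.abs(K) ** 2 + beta)
        x = np.real(np.fft.ifft2(X))
        # Hand-tuned increasing penalty schedule; the paper instead learns
        # layer-specific parameters constrained to approach a fixed point.
        beta *= rho
    return x
```

In the unrolled view, each pass through the loop becomes one network layer, and the per-layer penalty weight (the scalar beta above) plays the role of the layer-specific parameters that the paper constrains to approach a fixed point as the number of layers grows.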

