Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures (2403.04847v2)

Published 7 Mar 2024 in cs.LG and eess.SP

Abstract: Model-based deep learning methods such as loop unrolling (LU) and deep equilibrium model (DEQ) extensions offer outstanding performance in solving inverse problems (IP). These methods unroll the optimization iterations into a sequence of neural networks that in effect learn a regularization function from data. While these architectures are currently state-of-the-art in numerous applications, their success heavily relies on the accuracy of the forward model. This assumption can be limiting in many physical applications due to model simplifications or uncertainties in the apparatus. To address forward model mismatch, we introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance. We propose two variants in well-known model-based architectures (LU and DEQ) and prove convergence under mild conditions. Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass, benefiting both linear and nonlinear inverse problems. The experiments show significant quality improvement in removing artifacts and preserving details across three distinct applications, encompassing both linear and nonlinear inverse problems. Moreover, we highlight reconstruction effectiveness in intermediate steps and showcase robustness to random initialization of the residual block and a higher number of iterations during evaluation. Code is available at https://github.com/InvProbs/A-adaptive-model-based-methods.
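
To make the unrolling-plus-residual idea concrete, here is a minimal PyTorch sketch (not the authors' released implementation) of a loop-unrolled solver in which an untrained residual network corrects a possibly mismatched forward operator before each gradient step on the measurement-domain data-fit term. The names `ResidualForward` and `unrolled_reconstruction`, the dimensions, and the toy linear operator are illustrative assumptions; the paper's actual code is in the linked repository.

```python
import torch
import torch.nn as nn

class ResidualForward(nn.Module):
    """Wraps a nominal forward operator A with an untrained residual network,
    giving A_hat(x) = A(x) + delta(x); delta can be fit per instance at test time."""
    def __init__(self, A, n, m, hidden=64):
        super().__init__()
        self.A = A                                  # nominal (possibly mismatched) forward model
        self.delta = nn.Sequential(                 # small untrained correction network
            nn.Linear(n, hidden), nn.ReLU(), nn.Linear(hidden, m))

    def forward(self, x):
        return self.A(x) + self.delta(x)

def unrolled_reconstruction(y, A_hat, regularizers, x0, step_size=0.1):
    """Loop-unrolled reconstruction: each iteration takes a gradient step on the
    data-consistency term ||A_hat(x) - y||^2 and then applies a learned regularizer."""
    x = x0.requires_grad_(True)
    for R in regularizers:                          # one learned network per unrolled iteration
        data_fit = 0.5 * ((A_hat(x) - y) ** 2).sum()
        grad, = torch.autograd.grad(data_fit, x, create_graph=True)
        x = R(x - step_size * grad)                 # gradient step followed by learned proximal map
    return x

# Illustrative usage with made-up shapes and a stand-in linear operator. Per-instance
# fitting of the untrained residual block would additionally minimize the same
# measurement-domain data-fit term over delta's parameters during reconstruction.
n, m = 128, 64
A = nn.Linear(n, m, bias=False)
A_hat = ResidualForward(A, n, m)
regularizers = nn.ModuleList(
    [nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n)) for _ in range(5)])
y = torch.randn(m)
x_rec = unrolled_reconstruction(y, A_hat, regularizers, x0=torch.zeros(n))
```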
