
Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers (2311.18710v1)

Published 30 Nov 2023 in cs.CV and cs.LG

Abstract: Deep neural networks have become a foundational tool for addressing imaging inverse problems. They are typically trained for a specific task, with a supervised loss to learn a mapping from the observations to the image to be recovered. However, real-world imaging challenges often lack ground truth data, rendering traditional supervised approaches ineffective. Moreover, for each new imaging task, a new model needs to be trained from scratch, wasting time and resources. To overcome these limitations, we introduce a novel approach based on meta-learning. Our method trains a meta-model on a diverse set of imaging tasks, which allows the model to be efficiently fine-tuned for specific tasks with only a few fine-tuning steps. We show that the proposed method extends to the unsupervised setting, where no ground truth data is available. In its bilevel formulation, the outer level uses a supervised loss that evaluates how well the fine-tuned model performs, while the inner loss can be either supervised or unsupervised, relying only on the measurement operator. This allows the meta-model to leverage a few ground truth samples for each task while being able to generalize to new imaging tasks. We show that in simple settings, this approach recovers the Bayes optimal estimator, illustrating the soundness of our approach. We also demonstrate our method's effectiveness on various tasks, including image processing and magnetic resonance imaging.

