GLIMPSE: Generalized Local Imaging with MLPs (2401.00816v2)
Abstract: Deep learning is the current de facto state of the art in tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a convolutional neural network (CNN) which then computes the reconstruction. Despite strong results on in-distribution test data similar to the training data, backprojection from sparse-view data delocalizes singularities, so these approaches require a large receptive field to perform well. As a consequence, they overfit to certain global structures, which leads to poor generalization on out-of-distribution (OOD) samples. Moreover, their memory complexity and training time scale unfavorably with image resolution, making them impractical at realistic clinical resolutions, especially in 3D: a standard U-Net requires 140 GB of memory and 2600 seconds per epoch on a research-grade GPU when training on 1024x1024 images. In this paper, we introduce GLIMPSE, a local processing neural network for computed tomography which reconstructs a pixel value by feeding only the measurements associated with the neighborhood of that pixel to a simple MLP. While achieving performance comparable to or better than successful CNNs such as the U-Net on in-distribution test data, GLIMPSE significantly outperforms them on OOD samples while maintaining a memory footprint almost independent of image resolution; 5 GB of memory suffices to train on 1024x1024 images. Further, we built GLIMPSE to be fully differentiable, which enables feats such as recovering accurate projection angles when they are out of calibration.
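The local sinogram-to-pixel mapping described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the helper names (`gather_local_sinogram`, `LocalMLP`), the parallel-beam geometry, the detector-window size, and the MLP width are all illustrative assumptions.

```python
# Minimal sketch of the GLIMPSE idea: for each pixel, gather a small window of
# sinogram samples around its projection onto the detector at every angle, then
# let an MLP map that local patch of measurements to a pixel value.
import torch
import torch.nn as nn

def gather_local_sinogram(sino, angles, xy, window=9):
    """sino: (A, D) sinogram, angles: (A,) radians, xy: (P, 2) pixel coords in [-1, 1].
    Returns (P, A * window) local measurements obtained by linear interpolation."""
    A, D = sino.shape
    P = xy.shape[0]
    # detector coordinate of each pixel at each angle: t = x cos(theta) + y sin(theta)
    t = xy[:, 0:1] * torch.cos(angles)[None, :] + xy[:, 1:2] * torch.sin(angles)[None, :]  # (P, A)
    idx = (t + 1) * 0.5 * (D - 1)                      # fractional detector index, (P, A)
    offsets = torch.arange(window) - window // 2       # window of neighboring detector bins
    idx_w = idx[..., None] + offsets                    # (P, A, window)
    idx0 = idx_w.floor().clamp(0, D - 2).long()
    frac = (idx_w - idx0.float()).clamp(0, 1)
    ang = torch.arange(A)[None, :, None].expand(P, A, window)
    lo = sino[ang, idx0]
    hi = sino[ang, idx0 + 1]
    patch = (1 - frac) * lo + frac * hi                 # differentiable linear interpolation
    return patch.reshape(P, A * window)

class LocalMLP(nn.Module):
    """Simple MLP mapping a pixel's local measurements to its intensity."""
    def __init__(self, in_dim, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Toy usage: 60 projection angles, 128 detector bins, a random batch of pixels.
A, D, window = 60, 128, 9
sino = torch.randn(A, D)
angles = torch.linspace(0, torch.pi, A)
xy = torch.rand(1024, 2) * 2 - 1                        # pixel coordinates in [-1, 1]
feats = gather_local_sinogram(sino, angles, xy, window)
model = LocalMLP(in_dim=A * window)
pred = model(feats)                                      # (1024, 1) predicted pixel values
```

Because the sinogram is sampled with differentiable interpolation, gradients also flow to `angles`, which is the kind of mechanism that allows recovery of miscalibrated projection angles; and since the cost per step depends only on the pixel batch and window size rather than the full image grid, the memory footprint stays nearly independent of resolution.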
Authors: AmirEhsan Khorashadizadeh, Valentin Debarnot, Tianlin Liu, Ivan Dokmanić