
Glimpse: Generalized Locality for Scalable and Robust CT

Published 1 Jan 2024 in cs.CV, cs.LG, and eess.IV | arXiv:2401.00816v3

Abstract: Deep learning has become the state-of-the-art approach to medical tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a multiscale convolutional neural network (CNN) which computes the final reconstruction. Despite good results on in-distribution test data, this often results in overfitting to certain large-scale structures and poor generalization on out-of-distribution (OOD) samples. Moreover, the memory and computational complexity of multiscale CNNs scale unfavorably with image resolution, making them impractical at realistic clinical resolutions. In this paper, we introduce Glimpse, a local coordinate-based neural network for computed tomography which reconstructs a pixel value by processing only the measurements associated with the neighborhood of that pixel. Glimpse significantly outperforms successful CNNs on OOD samples, while achieving comparable or better performance on in-distribution test data and maintaining a memory footprint almost independent of image resolution; 5GB of memory suffices to train on 1024x1024 images, orders of magnitude less than CNNs require. Glimpse is fully differentiable and can be used plug-and-play in arbitrary deep learning architectures, enabling feats such as correcting miscalibrated projection orientations. Our implementation and Google Colab demo can be accessed at https://github.com/swing-research/Glimpse.
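The core idea in the abstract, reconstructing a pixel from only the sinogram measurements near that pixel's projection, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: for a pixel at (x, y), each view angle θ projects it to detector position t = x·cosθ + y·sinθ, so we gather a small window of sinogram samples around t at every angle and feed the resulting fixed-size feature vector to a small MLP (untrained random weights here; in the paper the network is trained end-to-end). The function name `gather_local_measurements` and all sizes are illustrative assumptions.

```python
import numpy as np

def gather_local_measurements(sinogram, angles, x, y, window=3):
    """For pixel (x, y), collect sinogram samples near its projection
    t = x*cos(theta) + y*sin(theta) at every view angle.

    sinogram: (n_angles, n_detectors), detector axis centered at n_detectors // 2.
    Returns a flat feature vector of length n_angles * (2*window + 1),
    independent of the image resolution.
    """
    n_angles, n_det = sinogram.shape
    center = n_det // 2
    feats = []
    for i, theta in enumerate(angles):
        t = x * np.cos(theta) + y * np.sin(theta)   # projected detector position
        idx = int(round(t)) + center
        lo, hi = idx - window, idx + window + 1
        patch = np.zeros(2 * window + 1)            # zero-pad at detector edges
        src_lo, src_hi = max(lo, 0), min(hi, n_det)
        if src_lo < src_hi:
            patch[src_lo - lo:src_hi - lo] = sinogram[i, src_lo:src_hi]
        feats.append(patch)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
n_angles, n_det, window = 30, 64, 3
angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
sinogram = rng.standard_normal((n_angles, n_det))   # stand-in measurements

# Local features for one pixel, then a tiny random two-layer MLP on top.
feat = gather_local_measurements(sinogram, angles, x=5.0, y=-3.0, window=window)
W1 = rng.standard_normal((64, feat.size)) * 0.05
W2 = rng.standard_normal((1, 64)) * 0.05
pixel_value = float(W2 @ np.maximum(W1 @ feat, 0.0))

print(feat.size)   # 30 angles * 7 samples = 210 features per pixel
```

Because the feature size depends only on the number of view angles and the window width, never on the output image size, the per-pixel memory cost stays flat as resolution grows, which is consistent with the abstract's claim of a resolution-independent memory footprint.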

