Accelerating Cardiac MRI Reconstruction with CMRatt: An Attention-Driven Approach (2404.06941v1)
Abstract: Cine cardiac magnetic resonance (CMR) imaging is recognised as the benchmark modality for the comprehensive assessment of cardiac function. Nevertheless, its prolonged scanning time remains a major impediment to routine use. A common strategy for accelerating acquisition is k-space undersampling, which comes at the cost of aliasing artefacts in the reconstructed image. Recently, deep learning-based methods have outperformed traditional approaches in rapidly producing accurate CMR reconstructions. This study explores the untapped potential of attention mechanisms integrated into a deep learning model for the CMR reconstruction problem. We are motivated by the fact that attention has proven beneficial in downstream tasks such as image classification and segmentation, but has not been systematically analysed in the context of CMR reconstruction. Our primary goal is to identify the strengths and potential limitations of attention algorithms when integrated with a convolutional backbone such as a U-Net. To this end, we benchmark several state-of-the-art spatial and channel attention mechanisms on the CMRxRecon dataset and quantitatively evaluate reconstruction quality using objective metrics. Furthermore, inspired by the best-performing attention mechanism, we propose a new, simple yet effective attention pipeline optimised specifically for cardiac image reconstruction that outperforms other state-of-the-art attention methods. The layer and model code will be made publicly available.
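The abstract refers to channel attention mechanisms benchmarked alongside a convolutional U-Net backbone. As a rough illustration of what such a mechanism computes, the sketch below implements a squeeze-and-excitation-style channel attention step in plain NumPy. It is not the paper's proposed pipeline: the weights here are random placeholders (in a real network they are learned), and the shapes and reduction ratio are arbitrary choices for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_channel_attention(feat, reduction=4, rng=None):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map.

    Placeholder weights stand in for the learned bottleneck MLP.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feat.shape[0]
    # Squeeze: global average pooling per channel -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (reduction ratio r), then sigmoid gating
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # per-channel gates in (0, 1)
    # Recalibrate: rescale each channel of the feature map by its gate
    return feat * s[:, None, None]

feat = np.ones((8, 4, 4))           # toy 8-channel feature map
out = se_channel_attention(feat)
print(out.shape)                     # (8, 4, 4), same shape as the input
```

Because the gates lie in (0, 1), the module can only attenuate channels; in a reconstruction network such a block would typically sit inside each encoder or decoder stage of the U-Net, letting the model reweight feature channels before the next convolution.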
- Anam Hashmi
- Julia Dietlmeier
- Kathleen M. Curran
- Noel E. O'Connor