Echocardiography Segmentation Using Neural ODE-based Diffeomorphic Registration Field
Abstract: Convolutional neural networks (CNNs) have recently proven their excellent ability to segment 2D cardiac ultrasound images. However, most attempts at full-sequence segmentation of cardiac ultrasound videos either rely on models trained only on keyframe images or fail to maintain the topology over time. To address these issues, we cast the segmentation of an ultrasound video as a registration estimation problem and present a novel method for diffeomorphic image registration using neural ordinary differential equations (Neural ODEs). In particular, we model the registration vector field between frames as a continuous trajectory governed by an ODE. The estimated registration field is then applied to the segmentation mask of the first frame to obtain segmentation masks for the whole cardiac cycle. The proposed method, Echo-ODE, introduces several key improvements over the previous state-of-the-art. First, by solving a continuous ODE, it produces smoother segmentations and preserves the topology of the segmentation maps over the whole sequence (Hausdorff distance: 3.7-4.4). Second, it maintains temporal consistency between frames without explicitly optimizing for temporal-consistency attributes, achieving temporal consistency in 91% of the videos in the dataset. Finally, it preserves the clinical accuracy of the segmentation maps (MAE of the LVEF: 2.7-3.1). The results show that our method surpasses the previous state-of-the-art in multiple aspects, demonstrating the importance of spatio-temporal data processing for applying Neural ODEs to medical imaging. These findings open up new research directions for echocardiography segmentation.
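The pipeline described in the abstract can be summarized in three steps: a network predicts a velocity field between frames, the velocity field is integrated by an ODE solver to obtain a diffeomorphic displacement, and the displacement warps the first-frame segmentation mask onto later frames. Below is a minimal sketch of this idea in PyTorch with the torchdiffeq package; the network architecture, function names, and coordinate conventions are illustrative assumptions for exposition, not the authors' Echo-ODE implementation.

```python
# Minimal sketch of Neural ODE-based registration for mask propagation.
# Assumes PyTorch and torchdiffeq; all architecture choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint


class VelocityField(nn.Module):
    """Small CNN predicting a 2D velocity field v(phi, t) (hypothetical)."""

    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.Tanh(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, t, phi):
        # phi: (B, 2, H, W) displacement field; t is the ODE time variable.
        return self.net(phi)


def integrate_deformation(velocity, shape, steps=8):
    """Integrate d(phi)/dt = v(phi, t) from t=0 to t=1 to get a displacement."""
    b, _, h, w = shape
    phi0 = torch.zeros(b, 2, h, w)                 # identity displacement
    t = torch.linspace(0.0, 1.0, steps)
    phi = odeint(velocity, phi0, t, method="rk4")  # (steps, B, 2, H, W)
    return phi[-1]                                 # displacement at t = 1


def warp(mask, phi):
    """Warp a segmentation mask with displacement phi via bilinear sampling."""
    b, _, h, w = mask.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # phi is assumed to be expressed in normalized [-1, 1] coordinates.
    grid = grid + phi.permute(0, 2, 3, 1)
    return F.grid_sample(mask, grid, mode="bilinear", align_corners=True)


if __name__ == "__main__":
    velocity = VelocityField()
    first_frame_mask = torch.rand(1, 1, 112, 112)  # e.g. an EchoNet-sized frame
    phi = integrate_deformation(velocity, (1, 2, 112, 112))
    propagated_mask = warp(first_frame_mask, phi)
    print(propagated_mask.shape)                   # torch.Size([1, 1, 112, 112])
```

In a full system, the velocity network would also be conditioned on the image pair (moving and target frames), and the warped masks would be trained with image-similarity and smoothness losses; the sketch above only illustrates the ODE integration of the registration field and the mask-warping step.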