Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound (1907.01743v2)
Abstract: Automatic prostate segmentation in transrectal ultrasound (TRUS) images is essential for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module selectively leverages the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at shallow layers of the CNN and enriching the features at deep layers with more prostate details. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy for aggregating multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
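The core idea of attention-based multi-level feature fusion can be illustrated with a minimal, framework-free sketch. This is our own simplification, not the paper's implementation: the actual DAF3D module learns per-position attention maps over 3D convolutional feature volumes, whereas here each layer gets a single scalar relevance score, and all function names are hypothetical.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_fusion(layer_feats, scores):
    """Fuse multi-level CNN features into one refined feature vector.

    layer_feats: one feature vector per CNN layer (all the same length,
                 e.g. shallow layers with boundary detail, deep layers
                 with semantic context).
    scores:      one relevance score per layer; in the real network these
                 would be predicted by a learned attention module.
    """
    weights = softmax(scores)
    fused = [0.0] * len(layer_feats[0])
    for w, feat in zip(weights, layer_feats):
        for i, v in enumerate(feat):
            fused[i] += w * v  # attention-weighted sum across layers
    return fused
```

With equal scores the fusion reduces to a plain average of the layer features; a higher score shifts the fused result toward the corresponding layer, which is how the attention mechanism suppresses noisy shallow features or emphasizes detailed ones per location in the full model.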