
Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound (1907.01743v2)

Published 3 Jul 2019 in eess.IV, cs.AI, cs.CV, and cs.LG

Abstract: Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module utilizes the attention mechanism to selectively leverage the multilevel features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at the shallow layers of the CNN and injecting more prostate detail into the features at the deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.


Summary

  • The paper introduces a 3D deep network, built on a customized 3D ResNeXt backbone with attention modules, to improve prostate segmentation in TRUS images.
  • It fuses multi-level and multi-scale contextual features via a feature pyramid network (FPN) and atrous spatial pyramid pooling (ASPP) to handle ambiguous boundaries and inhomogeneous intensities in organ delineation.
  • The method improves segmentation accuracy over state-of-the-art baselines, promising better guidance in image-based prostate cancer treatment planning.

Overview of Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound

The paper "Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound" addresses the complex challenge of automatic prostate segmentation within 3D transrectal ultrasound (TRUS) images. This task is pivotal for image-guided interventions and treatment planning in prostate cancer, yet remains difficult due to the prostate's ambiguous boundaries, variable shapes, and inhomogeneous intensity distribution in TRUS images.

Methodological Advancements

The authors present a novel 3D deep neural network architecture that leverages attention modules to create deep attentive features specifically for prostate segmentation in TRUS volumes. The primary contributions outlined include:

  • Deep Network Architecture with Attention Mechanisms: A customized 3D ResNeXt backbone integrated with a Feature Pyramid Network (FPN) captures multi-level contextual information, letting the network balance fine detail at shallow layers against semantic context at deep layers, which is crucial for handling the prostate's variable appearance in TRUS images.
  • Attention Modules for Feature Refinement: These modules refine the features at each layer by drawing on complementary information from the other layers. The attention mechanism learns per-voxel weights that selectively enhance relevant features and suppress noise, yielding higher-quality segmentations (a minimal sketch follows this list).
  • Efficient Use of Multi-Scale Contextual Information: The network adopts atrous spatial pyramid pooling (ASPP) to capture multi-scale features, which is critical for accurately representing the prostate's varying size and shape in the imaging data (see the second sketch below).
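To make the attention-based refinement concrete, the following is a minimal PyTorch sketch of how fused multi-level features might be folded into a single layer's features via a learned attention map. It follows the paper only in spirit: the module name AttentionRefine, the channel widths, the group-norm settings, and the residual combination are illustrative assumptions, not the authors' exact implementation (their released code at https://github.com/wulalago/DAF3D is the reference).

```python
import torch
import torch.nn as nn


class AttentionRefine(nn.Module):
    """Refine one layer's features using an attention map learned from
    the concatenation of that layer's features and fused multi-level
    features (assumed here to share spatial size and channel count)."""

    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3x3 convolutions predict a single-channel attention map;
        # group normalization reflects the paper's stated choice of norm.
        self.attn = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.GroupNorm(8, channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, single_layer: torch.Tensor, multi_level: torch.Tensor):
        # Per-voxel weights in [0, 1] decide how much multi-level context
        # to inject into this layer's features (the residual combination
        # is an illustrative choice, not necessarily the authors' exact one).
        a = torch.sigmoid(self.attn(torch.cat([single_layer, multi_level], dim=1)))
        return single_layer + a * multi_level


# Usage: refine a shallow layer's features with fused multi-level context.
refine = AttentionRefine(channels=32)
f_shallow = torch.randn(1, 32, 16, 32, 32)  # (N, C, D, H, W)
f_fused = torch.randn(1, 32, 16, 32, 32)    # multi-level features, resampled
out = refine(f_shallow, f_fused)            # same shape as f_shallow
```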
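Likewise, a compact sketch of 3D atrous spatial pyramid pooling: parallel dilated convolutions gather context at several receptive-field sizes, and a 1x1x1 convolution fuses them. The dilation rates and channel widths here are illustrative; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn


class ASPP3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 6)):
        super().__init__()
        # Parallel dilated 3D convolutions capture multi-scale context;
        # padding equals the dilation rate, so spatial size is preserved.
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv3d(len(rates) * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


aspp = ASPP3D(in_ch=64, out_ch=32)
y = aspp(torch.randn(1, 64, 8, 16, 16))  # -> (1, 32, 8, 16, 16)
```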

Experimental Validation

The method is validated on a challenging 3D TRUS dataset, where it outperforms several state-of-the-art segmentation models. The evaluation criteria include the Dice similarity coefficient, Jaccard index, conformity coefficient, average boundary distance, 95th-percentile Hausdorff distance, precision, and recall.
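For concreteness, here is a small NumPy sketch of the three overlap metrics named above, using their standard definitions (the conformity coefficient reduces to CC = 3 - 2/Dice). The boundary-distance metrics require an additional surface-extraction step and are omitted.

```python
import numpy as np


def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice, Jaccard, and conformity coefficient for binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    jaccard = inter / (np.logical_or(pred, gt).sum() + eps)
    conformity = 3.0 - 2.0 / max(dice, eps)  # CC = (3*Dice - 2) / Dice
    return dice, jaccard, conformity


# Toy example with two overlapping boxes standing in for prostate masks.
pred = np.zeros((16, 32, 32), dtype=bool)
pred[4:12, 8:24, 8:24] = True
gt = np.zeros_like(pred)
gt[5:12, 9:24, 8:24] = True
print(overlap_metrics(pred, gt))
```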

Implications and Future Directions

This research offers significant implications for medical image processing, particularly in contexts where accurate organ delineation is critical. The integration of attention mechanisms within a deep learning framework for TRUS segmentation underscores the potential for refining convolutional neural network outputs to better handle complex imaging challenges.

Furthermore, the proposed method's adaptability suggests potential application beyond prostate segmentation, offering a versatile tool for other medical imaging tasks requiring precise organ delineation amidst noisy background data.

In conclusion, while the paper demonstrates notable advancements in TRUS image segmentation, future research could explore the extension of this method to a broader range of medical imaging modalities, potentially improving diagnostic and treatment planning accuracy across various clinical scenarios. Additionally, testing on a larger, more diverse dataset would enhance the generalizability of the approach and further establish its efficacy.
