
Deep Learning for fully automatic detection, segmentation, and Gleason Grade estimation of prostate cancer in multiparametric Magnetic Resonance Images

Published 23 Mar 2021 in physics.med-ph and cs.CV | (2103.12650v3)

Abstract: The emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancers (PCa), which is the most prevalent malignancy in males in the western world, enabling a better selection of patients for confirmation biopsy. However, analyzing these images is complex even for experts, hence opening an opportunity for computer-aided diagnosis systems to seize. This paper proposes a fully automatic system based on Deep Learning that takes a prostate mpMRI from a PCa-suspect patient and, by leveraging the Retina U-Net detection framework, locates PCa lesions, segments them, and predicts their most likely Gleason grade group (GGG). It uses 490 mpMRIs for training/validation, and 75 patients for testing from two different datasets: ProstateX and IVO (Valencia Oncology Institute Foundation). In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG$\geq$2 significance criterion of 0.96/1.00/0.79 for the ProstateX dataset, and 0.95/1.00/0.80 for the IVO dataset. Evaluated at a patient level, the results are 0.87/1.00/0.375 in ProstateX, and 0.91/1.00/0.762 in IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying up with the original winner of the challenge). For expert comparison, IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at a lesion level, and 0.85/0.58 at a patient level. Additional subsystems for automatic prostate zonal segmentation and mpMRI non-rigid sequence registration were also employed to produce the final fully automated system. The code for the ProstateX-trained system has been made openly available at https://github.com/OscarPellicer/prostate_lesion_detection. We hope that this will represent a landmark for future research to use, compare and improve upon.

Citations (67)

Summary

  • The paper presents a comprehensive deep learning framework that automates detection, segmentation, and Gleason Grade estimation for prostate cancer in mpMRI.
  • The methodology employs a multi-channel 3D Retina U-Net architecture with rigorous pre-processing to enhance lesion detection and diagnostic accuracy.
  • Results demonstrate improved sensitivity and specificity over traditional radiologist assessments, indicating strong potential for clinical integration.

Deep Learning for Prostate Cancer Detection in mpMRI

The paper "Deep Learning for fully automatic detection, segmentation, and Gleason Grade estimation of prostate cancer in multiparametric Magnetic Resonance Images" presents a comprehensive deep learning framework designed to enhance prostate cancer diagnosis through multiparametric MRI (mpMRI) analyses. By utilizing the Retina U-Net detection architecture, the authors aim to automate the detection, segmentation, and Gleason grade prediction of prostate cancer lesions, thereby improving diagnostic accuracy and consistency.

Introduction and Background

Prostate cancer remains the most prevalent malignancy among males in the Western world. The advent of mpMRI has significantly reshaped the diagnostic pathway in prostate cancer, offering enhanced glandular imaging that supports better-targeted biopsies. However, interpreting mpMRI is complex and often subjective, leading to variability in clinical outcomes. This difficulty of manual interpretation opens avenues for computer-aided diagnosis (CAD) systems, which leverage machine learning to analyze imaging data with speed and accuracy.

The study outlines existing paradigms in CAD for prostate cancer detection, noting the shift from earlier statistical models to deep learning frameworks capable of handling complex image data and performing tasks beyond mere classification, such as lesion segmentation.

Methodology

The paper utilizes a dataset composed of mpMRIs from two distinct sources, ProstateX and IVO. It employs a multi-channel approach, integrating sequences such as T2-weighted, diffusion-weighted, and ADC maps to provide a comprehensive input for the model. A critical component of the preparation involves rigorous pre-processing to standardize and enrich the data, including automated zonal segmentation and sequence registration (Figure 1).

Figure 1: Final pre-processed image from a single patient (top: IVO, bottom: ProstateX). Channels (from left to right): T2, b400/b500, b800/b1000/b1400, ADC, $K^{trans}$.
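
As a rough illustration of this kind of multi-sequence assembly, the sketch below resamples each sequence onto the T2 grid and stacks the results into a channel-first volume. The use of SimpleITK, the file names, and the per-channel z-score normalisation are assumptions made for illustration, not the authors' exact pipeline (which additionally performs non-rigid registration and zonal segmentation).

```python
# Minimal sketch (not the authors' pipeline): resample each mpMRI sequence onto
# the T2 grid and stack everything into one multi-channel volume.
import numpy as np
import SimpleITK as sitk

def load_and_stack(t2_path, other_paths):
    """Resample every sequence onto the T2 geometry and stack channels first."""
    t2 = sitk.ReadImage(t2_path, sitk.sitkFloat32)
    channels = [sitk.GetArrayFromImage(t2)]
    for path in other_paths:
        img = sitk.ReadImage(path, sitk.sitkFloat32)
        # Linear resampling onto the T2 grid (spacing, origin, direction).
        resampled = sitk.Resample(img, t2, sitk.Transform(),
                                  sitk.sitkLinear, 0.0, sitk.sitkFloat32)
        channels.append(sitk.GetArrayFromImage(resampled))
    volume = np.stack(channels, axis=0)  # (C, Z, Y, X)
    # Per-channel z-score normalisation (an assumed, common pre-processing choice).
    mean = volume.mean(axis=(1, 2, 3), keepdims=True)
    std = volume.std(axis=(1, 2, 3), keepdims=True) + 1e-8
    return (volume - mean) / std

# Hypothetical usage (file names are placeholders):
# vol = load_and_stack("t2.nii.gz", ["b800.nii.gz", "adc.nii.gz", "ktrans.mha"])
```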

The proposed system utilizes a 3D Retina U-Net, combining Retina Net's one-stage detection capabilities with U-Net's segmentation prowess. The design is well suited to medical image processing, since it handles detections of varying number and scale across the resolutions of the decoder's feature pyramid while also producing a full-resolution segmentation.

Figure 2: Automatic registration between T2 sequence (left) and ADC map (center: before, right: after) for a sample mpMRI.
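
To make the architecture concrete, here is a highly simplified, self-contained sketch of a Retina U-Net-style 3D network in PyTorch. It is not the authors' implementation (their code is released at the linked repository); channel sizes, the number of anchors, and the six-class output (GGG 0 to 5, with GGG0 meaning benign) are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True))

class RetinaUNet3D(nn.Module):
    """U-Net backbone; decoder levels feed RetinaNet-style heads plus a segmentation head."""
    def __init__(self, in_channels=5, n_classes=6, n_anchors=9):
        super().__init__()
        chs = [32, 64, 128, 256]
        self.enc = nn.ModuleList([conv_block(in_channels, chs[0])] +
                                 [conv_block(chs[i], chs[i + 1]) for i in range(3)])
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ModuleList([nn.ConvTranspose3d(chs[i + 1], chs[i], 2, stride=2)
                                 for i in range(3)])
        self.dec = nn.ModuleList([conv_block(2 * chs[i], chs[i]) for i in range(3)])
        # Detection heads: per-level class logits and 3D box deltas (6 values per anchor).
        self.cls_heads = nn.ModuleList([nn.Conv3d(chs[i], n_anchors * n_classes, 3, padding=1)
                                        for i in range(3)])
        self.box_heads = nn.ModuleList([nn.Conv3d(chs[i], n_anchors * 6, 3, padding=1)
                                        for i in range(3)])
        # Full-resolution semantic segmentation head (the "U-Net" part of Retina U-Net).
        self.seg_head = nn.Conv3d(chs[0], n_classes, 1)

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            if i < 3:
                skips.append(x)
                x = self.pool(x)
        dec_feats = []
        for i in reversed(range(3)):
            x = self.up[i](x)
            x = self.dec[i](torch.cat([x, skips[i]], dim=1))
            dec_feats.append(x)  # coarsest decoder level first
        cls_out = [self.cls_heads[i](dec_feats[2 - i]) for i in range(3)]  # per-level class logits
        box_out = [self.box_heads[i](dec_feats[2 - i]) for i in range(3)]  # per-level box regressions
        seg_out = self.seg_head(dec_feats[-1])                             # full-resolution segmentation logits
        return cls_out, box_out, seg_out
```

In the real framework these outputs are decoded against anchor boxes and trained with a focal classification loss, a box-regression loss, and a segmentation loss; the sketch only shows how the detection and segmentation branches share one backbone.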

Results

The model's efficacy is assessed through lesion-level and patient-level evaluations, with sensitivity and specificity reported at selected operating points alongside AUC. It achieves high AUC scores for the significance criterion of Gleason Grade Group ≥2, and its sensitivity and specificity compare favourably with the radiologist PI-RADS readings. The model matched or outperformed expert interpretations in lesion detection, demonstrating robustness across both datasets.
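
The lesion-to-patient reduction can be illustrated with a short sketch: score each patient by their highest-scoring lesion, then derive AUC, sensitivity, and specificity from the patient-level scores. Whether the paper uses exactly this reduction and this operating point is an assumption, and the data below are hypothetical.

```python
# Minimal sketch of patient-level metrics derived from lesion-level scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# score = predicted probability that a lesion is significant (GGG >= 2); data is hypothetical.
lesions = [  # (patient_id, score, is_significant)
    ("p1", 0.92, 1), ("p1", 0.15, 0), ("p2", 0.40, 0), ("p3", 0.81, 1),
]

# Patient-level: keep the highest-scoring lesion (and its label) per patient.
patients = {}
for pid, score, label in lesions:
    s, l = patients.get(pid, (0.0, 0))
    patients[pid] = (max(s, score), max(l, label))
y_score = np.array([s for s, _ in patients.values()])
y_true = np.array([l for _, l in patients.values()])

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thr = roc_curve(y_true, y_score)
# Operating point: first threshold reaching sensitivity 1.0, mirroring the
# sensitivity-1.0 operating points reported in the abstract.
idx = np.argmax(tpr >= 1.0)
print(f"AUC={auc:.2f}  sensitivity={tpr[idx]:.2f}  specificity={1 - fpr[idx]:.2f}")
```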

Figure 3: Output of the model evaluated on two IVO test patients. GGG0 (benign) bounding boxes (BBs) are not shown, and only the highest-scoring BB is shown for highly overlapped detections (IoU > 0.25).
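
The overlap filtering mentioned in the caption is essentially greedy non-maximum suppression. A minimal sketch under that assumption, with an axis-aligned (z1, y1, x1, z2, y2, x2) box format chosen purely for illustration:

```python
# Greedy IoU-based suppression of overlapping 3D bounding boxes.
import numpy as np

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes (z1, y1, x1, z2, y2, x2)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-8)

def suppress_overlaps(boxes, scores, iou_thr=0.25):
    """Keep highest-scoring boxes first; drop any box overlapping a kept one above iou_thr."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou_3d(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep

# Hypothetical usage: two overlapping detections and one separate one.
boxes = np.array([[0, 0, 0, 10, 10, 10], [1, 1, 1, 11, 11, 11], [20, 20, 20, 30, 30, 30]], float)
scores = np.array([0.9, 0.6, 0.8])
print(suppress_overlaps(boxes, scores))  # -> [0, 2]
```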

Figure 4: Output of the model evaluated on three ProstateX test patients.

Validation on the online ProstateX grand challenge confirms competitive performance: the model reached an AUC of 0.85 (0.87 when trained only on ProstateX data), on par with the original challenge winner, whereas most top-ranked approaches rely on manually pre-selected ROIs rather than fully automated detection.

Discussion

The implications for clinical practice are substantial, with the potential for the system to assist in radiology workflows by reducing misinterpretation risks and expediting the diagnostic process. This research contributes to advancing CAD systems with fully independent lesion detection capabilities, paving the way for more extensive clinical trials.

The paper suggests potential directions for future AI systems in medicine, emphasizing the inclusion of diverse datasets to enhance model generalizability and reliability. The publicly available codebase encourages further development by other researchers aiming to refine or build upon the proposed model.

Conclusion

This paper presents a significant contribution to prostate cancer diagnosis through deep learning, showcasing the practical and theoretical potential of integrating advanced AI techniques in clinical settings. Future research could explore the application beyond oncology, employing similar methodologies for other complex image analysis tasks in medical diagnostics. This research highlights the transformational impact AI can have on medical imaging interpretation, improving accuracy and clinical outcomes in prostate cancer diagnosis.
