
Deep Learning for fully automatic detection, segmentation, and Gleason Grade estimation of prostate cancer in multiparametric Magnetic Resonance Images (2103.12650v3)

Published 23 Mar 2021 in physics.med-ph and cs.CV

Abstract: The emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancers (PCa), which is the most prevalent malignancy in males in the western world, enabling a better selection of patients for confirmation biopsy. However, analyzing these images is complex even for experts, hence opening an opportunity for computer-aided diagnosis systems to seize. This paper proposes a fully automatic system based on Deep Learning that takes a prostate mpMRI from a PCa-suspect patient and, by leveraging the Retina U-Net detection framework, locates PCa lesions, segments them, and predicts their most likely Gleason grade group (GGG). It uses 490 mpMRIs for training/validation, and 75 patients for testing from two different datasets: ProstateX and IVO (Valencia Oncology Institute Foundation). In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG$\geq$2 significance criterion of 0.96/1.00/0.79 for the ProstateX dataset, and 0.95/1.00/0.80 for the IVO dataset. Evaluated at a patient level, the results are 0.87/1.00/0.375 in ProstateX, and 0.91/1.00/0.762 in IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying up with the original winner of the challenge). For expert comparison, IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at a lesion level, and 0.85/0.58 at a patient level. Additional subsystems for automatic prostate zonal segmentation and mpMRI non-rigid sequence registration were also employed to produce the final fully automated system. The code for the ProstateX-trained system has been made openly available at https://github.com/OscarPellicer/prostate_lesion_detection. We hope that this will represent a landmark for future research to use, compare and improve upon.

Authors (10)
Citations (67)

Summary

  • The paper introduces a fully automated deep learning system based on Retina U-Net that detects, segments, and assigns Gleason Grades to prostate cancer lesions in mpMRI scans.
  • It demonstrates robust performance with lesion-level AUCs of 0.96 on ProstateX and 0.95 on IVO, maintaining a sensitivity of 1.00.
  • The integrated approach can reduce radiologist workload and improve early detection and treatment planning in clinical oncology.

Deep Learning in Automatic Detection and Grading of Prostate Cancer Using mpMRI

The paper under review presents a sophisticated approach leveraging deep learning for the automatic detection, segmentation, and Gleason Grade estimation of prostate cancer (PCa) from multiparametric Magnetic Resonance Imaging (mpMRI). This research addresses the pressing need for reliable Computer-Aided Diagnosis (CAD) tools, since interpreting prostate mpMRI remains challenging even for specialists due to its complexity and the variability of human evaluation.

Methodological Framework

The paper introduces a fully automated system built on a Deep Learning framework, specifically the Retina U-Net architecture, to process mpMRI data. This architecture combines the detection capabilities of RetinaNet with the segmentation precision of U-Net, adapted to the challenges posed by medical imaging. By extending the Feature Pyramid Network (FPN) with a segmentation branch, the model not only detects and segments candidate lesions but also assigns each one a Gleason Grade Group, providing a comprehensive analysis of prostate cancer severity.
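The Gleason Grade Group (GGG) labels predicted by the model follow the standard ISUP mapping from primary and secondary Gleason patterns. As a point of reference (this helper is not taken from the paper's codebase), the mapping and the paper's GGG≥2 significance criterion can be written as:

```python
def gleason_to_ggg(primary: int, secondary: int) -> int:
    """Map a Gleason score (primary + secondary pattern) to its
    ISUP Grade Group (1-5), the label space used for lesion grading."""
    total = primary + secondary
    if total <= 6:
        return 1                           # e.g. 3+3
    if total == 7:
        return 2 if primary == 3 else 3    # 3+4 vs 4+3
    if total == 8:
        return 4                           # 4+4, 3+5, 5+3
    return 5                               # Gleason 9-10

def is_significant(primary: int, secondary: int) -> bool:
    """GGG >= 2 is the clinical-significance threshold used
    throughout the paper's evaluation."""
    return gleason_to_ggg(primary, secondary) >= 2
```

The asymmetry at Gleason 7 (3+4 is GGG 2, 4+3 is GGG 3) is the reason grading is framed as predicting grade groups rather than raw Gleason sums.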

The system is trained and validated using two datasets: ProstateX and the Valencia Oncology Institute Foundation (IVO). The former is freely accessible, facilitating the potential for wide reproducibility and enhancement by the scientific community, while the latter provides a more localized dataset. Data preprocessing included essential steps such as intensity normalization, resolution standardization, automated prostate zonal segmentation, and sequence registration to ensure homogeneity across datasets and enrich the data representation.
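As a sketch of the kind of intensity normalization such a pipeline involves (the paper's exact preprocessing may differ), z-score normalization restricted to a region of interest can be implemented as follows; the `mask` argument here is a hypothetical stand-in for the automatic prostate segmentation:

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score-normalize an MRI volume (NumPy array).

    If a binary mask (e.g. a prostate segmentation) is given, the mean
    and standard deviation are computed inside the mask only, making
    the normalization robust to background and air voxels.
    """
    voxels = volume[mask > 0] if mask is not None else volume
    mu, sigma = voxels.mean(), voxels.std()
    return (volume - mu) / (sigma + 1e-8)  # epsilon guards flat regions
```

Normalizing per-volume (and per-sequence) in this way is a common step for harmonizing intensities across scanners and datasets such as ProstateX and IVO.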

Results and Analysis

The model's evaluation on the test sets yielded impressive metrics, achieving lesion-level AUC scores of 0.96 for ProstateX and 0.95 for IVO using the GGG≥2 significance threshold. Sensitivity was maintained at 1.00 across both datasets, while specificity varied slightly (0.79 and 0.80, respectively), indicating robust lesion detection capability. Patient-level analysis showed AUCs of 0.87 and 0.91 for ProstateX and IVO, respectively. On the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained on the ProstateX data alone, tying with the original winner of the challenge).
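The reported metrics follow their standard definitions and can be computed from per-lesion scores; a small self-contained sketch (using hypothetical scores, not the paper's data) makes the definitions concrete:

```python
def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),
    calling a lesion positive when its score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Threshold-free AUC: the probability that a positive lesion
    outscores a negative one (Mann-Whitney form, ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The sensitivity of 1.00 at both lesion and patient level corresponds to choosing an operating threshold low enough that no significant lesion is missed, with specificity absorbing the cost of that choice.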

Furthermore, comparison with radiologist-assigned PI-RADS scores illustrated the model's competitive performance: the IVO radiologist's PI-RADS 4 readings achieved sensitivity/specificity of 0.88/0.56 at the lesion level and 0.85/0.58 at the patient level, which the model matched or exceeded. Notably, at high-sensitivity operating points, the model mitigated the usual trade-off in specificity more effectively than traditional approaches.

Implications and Future Directions

The implications of this research are multifaceted. Practically, an efficient system that reduces the interpretative burden on radiologists could allow for broader deployment of mpMRI in routine screenings, potentially increasing early detection rates and optimizing patient management strategies. Theoretically, this work underscores the significance of integrating detection and grading within a unified deep learning framework, paving the way for more holistic diagnostic tools.

For future research, the open-source nature of the model's codebase allows for continuous improvement, adaptation to additional datasets, and refinement of the underlying AI algorithms. Enhancements may include refining the network's capacity to handle missing data better, exploring the integration of other relevant clinical data streams to augment predictive capability, and further improving lesion segmentation accuracy. Additionally, prospective studies involving direct clinical application could assess the tangible impact on patient outcomes and workflow efficiency.

Overall, this paper contributes substantially to the dialogue on implementing deep learning in medical imaging, demonstrating both the capabilities and complexities of deploying AI-driven systems in clinical oncology. The path forward will likely include incrementally building upon this foundation to move towards more comprehensive, accurate, and accessible cancer diagnostic solutions.