
Detecting and classifying lesions in mammograms with Deep Learning (1707.08401v3)

Published 26 Jul 2017 in cs.CV

Abstract: In the last two decades Computer Aided Diagnostics (CAD) systems were developed to help radiologists analyze screening mammograms. The benefits of current CAD technologies appear to be contradictory and they should be improved to be ultimately considered useful. Since 2012 deep convolutional neural networks (CNN) have been a tremendous success in image recognition, reaching human performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNNs have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state of the art classification performance on the public INbreast database, AUC = 0.95. The approach described here has achieved the 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model and an OsiriX plugin are available online at https://github.com/riblidezso/frcnn_cad .

Detection and Classification of Lesions in Mammograms Using Deep Learning

This paper presents a Computer-Aided Diagnostic (CAD) framework employing deep convolutional neural networks for the automatic detection and classification of lesions in mammographic images. The authors utilize the Faster R-CNN model, a prominent object detection framework known for its efficacy in general image analysis, to distinguish between benign and malignant lesions in mammograms, thus enhancing screening mammography—a critical tool for reducing breast cancer mortality.

Key Contributions

The primary contribution of this research is the adaptation of Faster R-CNN to the task of lesion detection in mammograms. Faster R-CNN employs a Region Proposal Network (RPN) to identify and localize lesions, which are then classified by a convolutional neural network as either benign or malignant. This framework effectively bypasses the limitations of traditional CAD systems, which rely heavily on hand-crafted features and often provide inconsistent results.
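The released implementation is Caffe-based (linked in the abstract); as a hedged illustration of the detect-then-classify idea rather than the authors' exact code, the sketch below configures torchvision's off-the-shelf Faster R-CNN with three classes (background, benign lesion, malignant lesion). The class indices, score threshold, and backbone choice are assumptions made purely for illustration.

```python
# Illustrative sketch only: a torchvision Faster R-CNN with two lesion classes
# plus background. The paper's released model is a Caffe/VGG16 implementation;
# this mirrors the idea, not the authors' code.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background, benign lesion, malignant lesion (assumed label scheme)

def build_lesion_detector():
    # Start from a generic Faster R-CNN and swap in a box predictor head
    # sized for the lesion classes.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    return model

def detect_lesions(model, image_tensor, score_threshold=0.5):
    """Run the detector on a single 3xHxW float tensor scaled to [0, 1]."""
    model.eval()
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] >= score_threshold
    return {
        "boxes": output["boxes"][keep],    # lesion bounding boxes (x1, y1, x2, y2)
        "labels": output["labels"][keep],  # 1 = benign, 2 = malignant (assumed)
        "scores": output["scores"][keep],  # detection confidences
    }
```

In this setup the Region Proposal Network and the classification head are trained jointly, which is what lets the system run without hand-crafted features or human intervention at inference time.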

Notably, the proposed system demonstrates state-of-the-art performance on the INbreast database, achieving an Area Under the Curve (AUC) of 0.95. The system also performed commendably in the Digital Mammography DREAM Challenge, securing second place with an AUC of 0.85. These results underscore the robustness of the proposed method on complex, real-world datasets.

Methodology and Datasets

The authors trained the proposed CAD model using annotated mammograms from datasets such as the Digital Database for Screening Mammography (DDSM) and a local dataset from Semmelweis University. Evaluations were conducted on the INbreast dataset, which contains full-field digital mammograms with pixel-level annotations. The choice of these datasets, particularly INbreast, allowed the team to rigorously test the system’s capabilities in detecting and classifying lesions accurately.
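Since DDSM scans and INbreast full-field digital mammograms come in different formats and bit depths, a typical preparatory step is to window and rescale each mammogram to an 8-bit image before it is fed to the detector. The snippet below is a minimal sketch of such a conversion, assuming pydicom-readable files; the percentile window, target size, and resampling choices are assumptions and may differ from the paper's actual pipeline.

```python
# Minimal preprocessing sketch (assumed, not the authors' exact pipeline):
# read a DICOM mammogram, clip the intensity range, and rescale to 8 bits.
import numpy as np
import pydicom
from PIL import Image

def mammogram_to_png(dicom_path, png_path, max_side=2048):
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)

    # Robust intensity windowing: clip to the 1st-99th percentile range.
    lo, hi = np.percentile(pixels, [1, 99])
    pixels = np.clip(pixels, lo, hi)
    pixels = (pixels - lo) / max(hi - lo, 1e-6)

    image = Image.fromarray((pixels * 255).astype(np.uint8))

    # Downscale so the longer side is at most max_side pixels (value assumed).
    scale = max_side / max(image.size)
    if scale < 1.0:
        new_size = (int(image.width * scale), int(image.height * scale))
        image = image.resize(new_size, Image.BILINEAR)
    image.save(png_path)
```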

Results and Analysis

The evaluation shows that the system identifies 90% of malignant lesions at a rate of 0.3 false positive marks per image on INbreast. These figures suggest that adapting a general object detection framework to mammography can streamline the CAD process and offer considerable improvements over existing technology; the combination of high sensitivity and a low false positive rate points to deep learning's potential to improve the diagnostic accuracy of breast cancer screening.
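Both headline numbers can be computed from per-image outputs: the image-level AUC from a single malignancy score per image, and the sensitivity at a fixed false-positive budget from the ranked detections. The sketch below is a hedged illustration, assuming the image-level score is the maximum malignant-lesion score on that image (a common convention, not confirmed by the paper) and using scikit-learn for the ROC computation.

```python
# Sketch of the evaluation metrics (assumed conventions, not the paper's script).
import numpy as np
from sklearn.metrics import roc_auc_score

def image_level_auc(labels, per_image_lesion_scores):
    """labels: 1 if the image contains a malignant lesion, else 0.
    per_image_lesion_scores: one array of malignant-lesion scores per image."""
    image_scores = [scores.max() if len(scores) else 0.0
                    for scores in per_image_lesion_scores]
    return roc_auc_score(labels, image_scores)

def sensitivity_at_fp_rate(detections, num_lesions, num_images, fp_per_image=0.3):
    """detections: (score, is_true_positive) pairs pooled over all images.
    num_lesions: total number of malignant lesions in the dataset."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    sensitivity = 0.0
    for score, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / num_images > fp_per_image:
            break  # exceeded the allowed false positives per image
        sensitivity = tp / num_lesions
    return sensitivity
```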

Implications and Future Directions

The implications of this research are significant for both clinical practice and the advancement of AI in medical imaging. By automating lesion detection and classification, this system can function as a perception-enhancer for radiologists, potentially increasing detection rates and reducing diagnostic errors. Furthermore, the reliance on fewer hand-crafted features implies less dependency on expert manual input, which could lower operational costs and time.

Future research could explore the incorporation of larger and more varied datasets to improve detection generalizability across different demographic and medical contexts. Moreover, further integration of AI systems into clinical workflows, ensuring ethical compliance and data privacy, will be vital for wide-scale deployment.

In conclusion, this paper demonstrates the potential of deep learning frameworks to significantly advance breast cancer screening processes. Continued exploration in this domain is likely to yield further enhancements in both theoretical understanding and practical applications in AI-driven medical diagnostics.

Authors (5)
  1. Dezső Ribli (5 papers)
  2. Anna Horváth (4 papers)
  3. Zsuzsa Unger (1 paper)
  4. Péter Pollner (13 papers)
  5. István Csabai (42 papers)
Citations (554)