
DeepSeeNet: A deep learning model for automated classification of patient-based age-related macular degeneration severity from color fundus photographs (1811.07492v2)

Published 19 Nov 2018 in cs.CV

Abstract: In assessing the severity of age-related macular degeneration (AMD), the Age-Related Eye Disease Study (AREDS) Simplified Severity Scale predicts the risk of progression to late AMD. However, its manual use requires the time-consuming participation of expert practitioners. Although several automated deep learning systems have been developed for classifying color fundus photographs (CFP) of individual eyes by AREDS severity score, none to date has used a patient-based scoring system that uses images from both eyes to assign a severity score. DeepSeeNet, a deep learning model, was developed to classify patients automatically by the AREDS Simplified Severity Scale (score 0-5) using bilateral CFP. DeepSeeNet was trained on 58,402 and tested on 900 images from the longitudinal follow-up of 4549 participants from AREDS. Gold standard labels were obtained using reading center grades. DeepSeeNet simulates the human grading process by first detecting individual AMD risk factors (drusen size, pigmentary abnormalities) for each eye and then calculating a patient-based AMD severity score using the AREDS Simplified Severity Scale. DeepSeeNet performed better on patient-based classification (accuracy = 0.671; kappa = 0.558) than retinal specialists (accuracy = 0.599; kappa = 0.467) with high AUC in the detection of large drusen (0.94), pigmentary abnormalities (0.93), and late AMD (0.97). DeepSeeNet demonstrated high accuracy with increased transparency in the automated assignment of individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. These results highlight the potential of deep learning to assist and enhance clinical decision-making in patients with AMD, such as early AMD detection and risk prediction for developing late AMD. DeepSeeNet is publicly available on https://github.com/ncbi-nlp/DeepSeeNet.

Summary

  • The paper demonstrates a novel patient-based deep learning approach that classifies AMD severity from bilateral fundus photographs.
  • It leverages three specialized CNN sub-networks based on the Inception-v3 architecture, achieving high accuracy with AUC scores up to 0.97.
  • The findings suggest DeepSeeNet could enhance clinical telemedicine by automating AMD screening and risk stratification in underserved areas.

DeepSeeNet: A Deep Learning Model for AMD Severity Classification

The paper presents DeepSeeNet, a deep learning model developed to automate the classification of age-related macular degeneration (AMD) severity from color fundus photographs (CFP). The work is notable for its patient-based classification approach, an advance over previous models that assessed each eye in isolation. Leveraging the AREDS Simplified Severity Scale, DeepSeeNet assigns a composite severity score by analyzing both of a patient's eyes, thereby simulating the grading process traditionally carried out by ophthalmologists.
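
To make the patient-based scoring concrete, the following minimal Python sketch combines per-eye risk factors into a patient-level score. It reflects one reading of the AREDS Simplified Severity Scale (one risk factor per eye for large drusen and for pigmentary abnormalities, a bilateral intermediate-drusen rule, and a top score of 5 for late AMD in either eye); the function and field names are illustrative, not the authors' code.

```python
# Sketch of patient-level scoring under the AREDS Simplified Severity Scale,
# as summarized above; an illustrative reading, not the authors' implementation.

def patient_severity_score(eyes):
    """eyes: list of two per-eye dicts with keys
    'drusen' in {'none_small', 'medium', 'large'},
    'pigment' (bool), and 'late_amd' (bool)."""
    # Late AMD in either eye maps directly to the top category.
    if any(e["late_amd"] for e in eyes):
        return 5

    # One risk factor per eye for large drusen and for pigmentary abnormalities.
    score = sum((e["drusen"] == "large") + e["pigment"] for e in eyes)

    # Bilateral medium (intermediate) drusen, with no large drusen in either
    # eye, counts as a single additional risk factor.
    if all(e["drusen"] == "medium" for e in eyes):
        score += 1

    return min(score, 4)  # without late AMD the risk-factor sum tops out at 4

# Example: large drusen in both eyes plus pigment changes in one eye -> score 3
print(patient_severity_score([
    {"drusen": "large", "pigment": True,  "late_amd": False},
    {"drusen": "large", "pigment": False, "late_amd": False},
]))
```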

Methodology and Dataset

DeepSeeNet was trained on a comprehensive dataset from the Age-Related Eye Disease Study (AREDS) comprising 58,402 images from the longitudinal follow-up of 4,549 participants, and was tested on a held-out set of 900 images, with gold-standard labels obtained from reading center grades. The model's framework comprises three CNN sub-networks: Drusen-Net (D-Net), Pigment-Net (P-Net), and Late AMD-Net (LA-Net), each tasked with detecting a specific AMD risk factor. Each sub-network uses the Inception-v3 architecture for its strong feature learning and classification capability. Training involved fine-tuning with the Keras API on TensorFlow, optimizing the model weights with the Adam algorithm.
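
As a rough illustration of this setup, the sketch below builds one Inception-v3-based sub-network with the Keras API on TensorFlow and compiles it with the Adam optimizer. The input size, classification head, and learning rate are assumptions for illustration rather than the authors' exact configuration.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import InceptionV3

def build_subnetwork(num_classes):
    # Start from ImageNet weights and fine-tune on fundus photographs.
    base = InceptionV3(weights="imagenet", include_top=False,
                       input_shape=(299, 299, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs=base.input, outputs=out)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# D-Net grades drusen into three classes (none/small, medium, large);
# P-Net and LA-Net would be built the same way with binary outputs.
d_net = build_subnetwork(num_classes=3)
```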

Results

On patient-based classification, DeepSeeNet outperformed retinal specialists, achieving an overall accuracy of 67.1% and a Cohen's kappa of 0.558, versus 59.9% and 0.467 for the specialists. It also attained high AUCs in detecting the individual risk factors: 0.94 for large drusen, 0.93 for pigmentary abnormalities, and 0.97 for late AMD. Despite this general efficacy, the model's performance in detecting late AMD was slightly inferior to that of the human specialists, indicating room for further optimization, potentially through increased exposure to late AMD cases during training.
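
For reference, the reported figures correspond to standard metrics that can be computed as in this scikit-learn sketch; the arrays below are placeholder values, not the study's data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

# Patient-level severity predictions (scores 0-5) vs. reading center labels.
y_true = np.array([0, 1, 2, 3, 4, 5, 2, 1])
y_pred = np.array([0, 1, 2, 2, 4, 5, 3, 1])
print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))

# Per-eye detection of a binary risk factor (e.g. large drusen) is
# summarized by the AUC of the model's predicted probabilities.
y_binary = np.array([0, 1, 1, 0, 1, 0])
y_score  = np.array([0.1, 0.9, 0.7, 0.3, 0.8, 0.2])
print("AUC:     ", roc_auc_score(y_binary, y_score))
```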

Implications

The results underscore DeepSeeNet's potential in clinical settings as a tool for automated and efficient assessment of AMD progression risk. By simulating the grading process of an ophthalmologist and providing transparency through interpretability techniques such as saliency maps and t-SNE visualizations, DeepSeeNet addresses common concerns about the opacity of AI systems. Given the growing global prevalence of AMD and constraints on specialist availability, the model has significant implications for telemedicine, enabling remote screening and prioritization in underserved areas.
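
As an example of the interpretability side, a gradient-based saliency map can be computed as in the generic TensorFlow sketch below (gradient of the top class score with respect to the input pixels). This is one common saliency technique, not necessarily the exact method the authors used.

```python
import tensorflow as tf

def saliency_map(model, image):
    """image: preprocessed float32 tensor of shape (1, H, W, 3)."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        preds = model(image, training=False)
        # Score of the highest-probability class for this image.
        score = tf.reduce_max(preds[0])
    grads = tape.gradient(score, image)
    # Collapse color channels: pixels with large |gradient| are the ones
    # that most influence the prediction, which the map visualizes.
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]
```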

Future Directions

The paper suggests several avenues for future work, including the integration of additional imaging modalities such as OCT or fundus autofluorescence and the expansion of training datasets to enhance model robustness. Incorporating multimodal data, such as genetic or demographic information, could further improve predictive accuracy. Validation of DeepSeeNet across diverse demographics and clinical contexts is also needed to establish its adaptability and efficacy.

DeepSeeNet exemplifies the intersection of AI and ophthalmology, with the potential to improve patient outcomes through the early detection and risk stratification that are crucial for managing AMD effectively. By making the model publicly available, the research fosters an open-source approach to innovation in medical AI and sets a benchmark for future studies on retinal disease classification.