LMT: Longitudinal Mixing Training, a Framework to Predict Disease Progression from a Single Image (2310.10420v1)

Published 16 Oct 2023 in eess.IV, cs.CV, and cs.LG

Abstract: Longitudinal imaging is able to capture both static anatomical structures and dynamic changes in disease progression, enabling earlier and better patient-specific pathology management. However, conventional approaches rarely take advantage of longitudinal information for detection and prediction purposes, especially for Diabetic Retinopathy (DR). In recent years, Mix-up training and pretext tasks with longitudinal context have effectively enhanced DR classification results and captured disease progression. In the meantime, a novel type of neural network named Neural Ordinary Differential Equation (NODE) has been proposed for solving ordinary differential equations, with a neural network treated as a black box. By definition, NODE is well suited for solving time-related problems. In this paper, we propose to combine these three aspects to detect and predict DR progression. Our framework, Longitudinal Mixing Training (LMT), can be considered both as a regularizer and as a pretext task that encodes the disease progression in the latent space. Additionally, we evaluate the trained model weights on a downstream task with a longitudinal context, using standard and longitudinal pretext tasks. We introduce a new way to train time-aware models using $t_{mix}$, a weighted average time between two consecutive examinations. We compare our approach to standard mixing training on DR classification using OPHDIAT, a longitudinal retinal Color Fundus Photograph (CFP) dataset. We were able to predict whether an eye would develop severe DR at the following visit using a single image, with an AUC of 0.798 compared to a baseline of 0.641. Our results indicate that our longitudinal pretext task can learn the progression of DR and that introducing $t_{mix}$ augmentation is beneficial for time-aware models.
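The abstract describes $t_{mix}$ as a weighted average time between two consecutive examinations, mixed in the same way as the images themselves. The snippet below is a minimal, hypothetical sketch of that idea, assuming a standard mixup-style coefficient drawn from a Beta distribution; the function name `longitudinal_mixup` and the parameter `alpha` are illustrative and not taken from the paper.

```python
import torch

def longitudinal_mixup(x_prev, x_next, t_prev, t_next, alpha=0.4):
    """Sketch of time-aware mixup between two consecutive examinations.

    x_prev, x_next: image tensors from visits acquired at times t_prev < t_next.
    lam: mixing coefficient sampled from Beta(alpha, alpha), as in standard mixup
         (the Beta parameterization here is an assumption, not the paper's recipe).
    Returns the mixed image and t_mix, the correspondingly weighted average time.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_prev + (1.0 - lam) * x_next   # mixed input image
    t_mix = lam * t_prev + (1.0 - lam) * t_next   # weighted average examination time
    return x_mix, t_mix
```

A time-aware model (e.g., one built on a NODE block) could then be conditioned on $t_{mix}$ so that interpolated inputs are paired with a consistent interpolated time, which is what makes the augmentation compatible with models that integrate over time.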

Authors (11)
  1. Rachid Zeghlache (10 papers)
  2. Pierre-Henri Conze (38 papers)
  3. Mostafa El Habib Daho (14 papers)
  4. Yihao Li (30 papers)
  5. Ramin Tadayoni (11 papers)
  6. Pascal Massin (5 papers)
  7. Béatrice Cochener (22 papers)
  8. Ikram Brahim (7 papers)
  9. Gwenolé Quellec (34 papers)
  10. Mathieu Lamard (27 papers)
  11. Hugo Le boite (1 paper)
Citations (4)