3DTINC: Time-Equivariant Non-Contrastive Learning for Predicting Disease Progression from Longitudinal OCTs (2312.16980v2)
Abstract: Self-supervised learning (SSL) has emerged as a powerful technique for improving the efficiency and effectiveness of deep learning models. Contrastive methods are a prominent family of SSL that extract similar representations of two augmented views of an image while pushing others away in the representation space as negatives. However, state-of-the-art contrastive methods require large batch sizes and augmentations designed for natural images, which are impractical for 3D medical images. To address these limitations, we propose a new longitudinal SSL method, 3DTINC, based on non-contrastive learning. It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT. We introduce a new non-contrastive similarity loss term that learns temporal information implicitly from intra-patient scans acquired at different times. Our experiments show that this temporal information is crucial for predicting the progression of retinal diseases, such as age-related macular degeneration (AMD). After pretraining with 3DTINC, we evaluated the learned representations and the prognostic models on two large-scale longitudinal datasets of retinal OCTs, predicting conversion to wet-AMD within a six-month interval. Our results demonstrate that each component of our contributions is crucial for learning meaningful representations useful in predicting disease progression from longitudinal volumetric scans.
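The abstract describes a non-contrastive similarity loss applied to embeddings of intra-patient scans from different visits, without negative pairs. The sketch below illustrates the general idea of such a loss (mean negative cosine similarity between paired embeddings, as used in non-contrastive SSL methods); it is a generic illustration under assumed function and variable names, not the exact 3DTINC loss, whose precise formulation is not given in the abstract.

```python
import numpy as np

def non_contrastive_similarity_loss(z_a, z_b):
    """Mean negative cosine similarity between paired embeddings.

    z_a, z_b: arrays of shape (batch, dim), e.g. embeddings of two
    augmented views, or of two intra-patient scans from different
    visits (the temporal pairing suggested by the abstract).
    No negative pairs are used, hence "non-contrastive".
    """
    # L2-normalize each embedding so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=-1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=-1, keepdims=True)
    # Loss is minimized (-1) when paired embeddings are perfectly aligned.
    return -np.mean(np.sum(z_a * z_b, axis=-1))

# Identical embeddings give the minimum loss of -1.0; orthogonal give 0.0.
z = np.array([[1.0, 0.0], [0.0, 1.0]])
print(non_contrastive_similarity_loss(z, z))  # -1.0
```

In practice such losses are combined with an architectural or regularization mechanism (e.g. a predictor head with stop-gradient, or variance/covariance terms) to prevent representation collapse, since there are no negatives to push embeddings apart.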
- Taha Emre
- Arunava Chakravarty
- Antoine Rivail
- Dmitrii Lachinov
- Oliver Leingang
- Sophie Riedl
- Julia Mai
- Hendrik P. N. Scholl
- Sobha Sivaprasad
- Daniel Rueckert
- Andrew Lotery
- Ursula Schmidt-Erfurth
- Hrvoje Bogunović