Understand links between SSRL and self-supervised imaging losses
Investigate the theoretical and practical connections between self-supervised representation learning (SSRL) methods such as SimCLR, BYOL, DINO, and masked autoencoders, and self-supervised learning for imaging inverse problems, which instead relies on measurement-only losses and known acquisition physics. The goal is to clarify how invariance and masking principles translate between the two domains.
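To make the parallel concrete, the following is a minimal NumPy sketch contrasting the two loss families: a SimCLR-style invariance penalty (here a plain L2 term between embeddings of two augmentations, standing in for the full InfoNCE objective) versus an equivariant-imaging-style measurement-only loss that asks a reconstruction to commute with a transform, given known physics. The encoder, mask `A`, transform `T`, and reconstruction `f` are all toy placeholders introduced for illustration, not the actual methods cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# --- Representation-learning side (SimCLR-style, heavily simplified) ---
# Pull together embeddings of two random augmentations of the same image.
# A plain L2 invariance penalty stands in for the full InfoNCE objective.
W = rng.standard_normal((4, n))          # toy linear encoder weights

def encoder(x):
    return np.tanh(W @ x)

def augment(x):
    return np.roll(x, rng.integers(n))   # random circular shift as augmentation

def invariance_loss(x):
    return np.mean((encoder(augment(x)) - encoder(augment(x))) ** 2)

# --- Inverse-problem side (equivariant-imaging-style, sketch) ---
# Known acquisition physics A: a subsampling mask keeping every other pixel.
# The loss uses only measurements y = A x and penalises a reconstruction f
# for failing to commute with the transform T; no ground-truth image appears.
mask = np.zeros(n)
mask[::2] = 1.0
A = np.diag(mask)

def T(x):
    return np.roll(x, 1)                 # fixed shift by one pixel

def f(y):
    return A.T @ y                       # toy reconstruction: zero-filled adjoint

def equivariance_loss(y):
    x_hat = f(y)
    x_hat2 = f(A @ T(x_hat))             # re-measure the transformed estimate
    return np.mean((T(x_hat) - x_hat2) ** 2)

x = rng.standard_normal(n)
y = A @ x                                # only measurements are observed
print(f"invariance loss:   {invariance_loss(x):.4f}")
print(f"equivariance loss: {equivariance_loss(y):.4f}")
```

Both losses are driven by the same principle: consistency under a group of transformations. The representation-learning loss enforces it in embedding space with no physics, while the imaging loss enforces it in image space through the known operator `A`, which is what lets it be trained from measurements alone.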
References
Despite these differences, some of the fundamental principles behind the design of pretext tasks, such as invariance to transformations or masking, are also pillars of the self-supervised losses used for imaging inverse problems, and a better understanding of the connections between these two fields remains an open research problem.
— Self-Supervised Learning from Noisy and Incomplete Data
(2601.03244 - Tachella et al., 6 Jan 2026) in Section "What this manuscript is not about" (Self-supervised representation learning), Chapter 1