Stronger and Adaptive Augmentation for Contrastive Learning in Domain Generalization

Develop stronger and more adaptive data augmentation methods tailored to contrastive learning in domain generalization, beyond the standard augmentations evaluated in this work, in order to improve generalization to unseen domains under distribution shift.

Background

The paper investigates why self-supervised contrastive learning often fails in domain generalization and proposes Domain-Connecting Contrastive Learning (DCCL), which enhances intra-class connectivity across domains via aggressive augmentation, cross-domain positive pairs, and anchoring to pre-trained models. Although stronger augmentation is shown to help, the authors only increase the intensity of color jittering, and they note that more sophisticated, adaptive augmentation strategies could further improve performance.
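To make the "stronger augmentation" baseline concrete, the following is a minimal sketch of a two-view contrastive augmentation pipeline with intensified color jittering, assuming a torchvision-based setup. The specific parameter values and the `TwoViewTransform` helper are illustrative assumptions, not the paper's exact settings.

```python
from torchvision import transforms

# Single-view augmentation with intensified color jittering.
# The jitter magnitudes below are illustrative; the paper only states that
# color-jitter intensity is increased, not the exact values used.
strong_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(
        [transforms.ColorJitter(brightness=0.8, contrast=0.8,
                                saturation=0.8, hue=0.2)],
        p=0.8,
    ),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])


class TwoViewTransform:
    """Produce two independently augmented views of the same image,
    which serve as a positive pair for the contrastive loss."""

    def __init__(self, transform):
        self.transform = transform

    def __call__(self, x):
        return self.transform(x), self.transform(x)
```

In practice, `TwoViewTransform(strong_augment)` would replace the dataset's transform so that each sample yields a positive pair; the contrastive objective itself is unchanged.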

In their discussion of limitations, the authors explicitly state that devising stronger and more adaptive augmentation methods for contrastive learning in domain generalization remains unresolved, a concrete methodological gap that affects robustness to diverse distribution shifts.
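The paper leaves "adaptive" unspecified. As a purely hypothetical illustration of what an adaptive strategy could look like (not a method from the paper), the sketch below exposes augmentation strength as a scalar and ramps it over training; the function names `make_jitter` and `strength_at`, along with all numeric values, are assumptions for illustration only.

```python
from torchvision import transforms


def make_jitter(strength: float) -> transforms.Compose:
    """Build an augmentation whose color-jitter intensity scales with
    `strength` in [0, 1]; magnitudes are illustrative placeholders."""
    s = max(0.0, min(1.0, strength))
    return transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomApply(
            [transforms.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)],
            p=0.8,
        ),
        transforms.ToTensor(),
    ])


def strength_at(step: int, total_steps: int,
                min_s: float = 0.4, max_s: float = 1.0) -> float:
    """Placeholder schedule: linearly ramp augmentation strength over
    training. A real adaptive scheme might instead condition on the
    source domain or on training statistics."""
    return min_s + (max_s - min_s) * step / max(1, total_steps)
```

This only sketches the interface of an adaptive scheme; the open problem is precisely which signal (domain identity, loss values, feature statistics) should drive the strength and how to do so without collapsing the contrastive representations.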

References

In addition, how to develop stronger and more adaptive augmentation methods for contrastive learning on DG is not explored in this paper and remains an open problem.

Connecting Domains and Contrasting Samples: A Ladder for Domain Generalization (arXiv:2510.16704, Wei et al., 19 Oct 2025), Appendix, Section “Discussions & Limitations”.