
Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data (2110.03374v6)

Published 7 Oct 2021 in cs.CV

Abstract: Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability, and data transmission efficiency. We study unsupervised model adaptation (UMA), also called Unsupervised Domain Adaptation without Source Data, an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits historical source hypotheses to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID), which learns from target samples by contrasting their embeddings as generated by the currently adapted model and by historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD), which pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms and complements state-of-the-art methods consistently across a variety of visual tasks and setups.

Citations (203)

Summary

Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data

The paper investigates unsupervised model adaptation (UMA), a setting that bypasses the need for source-data access and the privacy and efficiency challenges such access poses. Specifically, the paper introduces a novel approach termed Historical Contrastive Learning (HCL), designed for UMA, which exploits historical model checkpoints rather than source data to adapt a source-trained model to an unlabeled target domain.

Key Contributions and Methodology

The primary contribution of this research is the development of HCL, which leverages historical knowledge encapsulated in previous model states to mitigate the information loss caused by the absence of source data. The methodology comprises two principal components: Historical Contrastive Instance Discrimination (HCID) and Historical Contrastive Category Discrimination (HCCD).

  1. Historical Contrastive Instance Discrimination (HCID):

HCID introduces a contrastive mechanism that operates at the instance level by comparing the embeddings generated by the current model with those produced by historical models: the two embeddings of the same target sample act as a positive pair, while embeddings of other samples serve as negatives. Applying a contrastive loss over these pairs encourages instance-discriminative target representations while preserving the source hypothesis, thus enhancing generalization to the target domain (a minimal sketch follows).
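
The sketch below illustrates this idea in PyTorch. It assumes a MoCo-style InfoNCE formulation with a single frozen historical checkpoint standing in for the model history; the function and variable names are illustrative and not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def hcid_loss(query_feats, key_feats, temperature=0.07):
    """Historical contrastive instance discrimination (illustrative sketch).

    query_feats: (B, D) embeddings of a target batch from the *current* model.
    key_feats:   (B, D) embeddings of the same batch from a *historical*
                 model, e.g. the frozen source-trained checkpoint.

    Matching (query, key) rows are positives; every other key in the batch
    serves as a negative, giving an InfoNCE loss over the similarity matrix.
    """
    q = F.normalize(query_feats, dim=1)
    k = F.normalize(key_feats, dim=1).detach()   # no gradient through history
    logits = q @ k.t() / temperature             # (B, B) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```

The full method can draw keys from multiple historical models rather than a single checkpoint; the single-checkpoint version above keeps the sketch compact.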

  2. Historical Contrastive Category Discrimination (HCCD):

HCCD functions at the category level and employs pseudo-labeling to learn category-discriminative representations among target-domain samples. It calibrates pseudo-label reliability through consistency checks between current and historical model predictions: pseudo labels on which the two sets of predictions agree receive larger weights in the training loss. This mechanism keeps the model aligned with categorical distinctions, facilitating task-specific adaptation (a companion sketch follows).
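
A companion sketch of the consistency-based re-weighting, under the same caveats: the agreement measure used here (joint confidence of the current and historical models in the pseudo class) is an assumed instantiation of the consistency check, not the paper's exact weighting scheme.

```python
import torch
import torch.nn.functional as F

def hccd_loss(current_logits, historical_logits):
    """Historical contrastive category discrimination (illustrative sketch).

    current_logits:    (B, C) target-sample predictions of the adapting model.
    historical_logits: (B, C) predictions of a historical checkpoint.

    Each target sample is pseudo-labelled by the current model, and its
    cross-entropy term is re-weighted by how strongly the current and
    historical models agree on that pseudo class.
    """
    p_cur = F.softmax(current_logits, dim=1)
    p_his = F.softmax(historical_logits, dim=1).detach()

    pseudo = p_cur.argmax(dim=1)                 # (B,) hard pseudo labels
    idx = pseudo.unsqueeze(1)                    # (B, 1) index for gather
    # Joint confidence of both models in the pseudo class: high only
    # when current and historical predictions are consistent.
    weight = (p_cur.gather(1, idx) * p_his.gather(1, idx)).squeeze(1).detach()

    ce = F.cross_entropy(current_logits, pseudo, reduction="none")  # (B,)
    return (weight * ce).mean()
```

Detaching the weight ensures that only the weighted cross-entropy term, not the re-weighting itself, drives the gradient, so unreliable pseudo labels are simply down-weighted rather than actively penalized.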

Experimental Results

The paper presents extensive evaluations on semantic segmentation, object detection, and image classification tasks using well-established benchmarks such as GTA5, Cityscapes, Foggy Cityscapes, BDD100k, VisDA17, and Office-31. HCL consistently demonstrated improvements over baseline UMA methods and was competitive with several state-of-the-art UDA approaches that require access to source data.

For instance, in the GTA5-to-Cityscapes segmentation task, HCL increased the mean Intersection-over-Union (mIoU) significantly. Similarly, in object detection and classification tasks, HCL outperformed existing UMA solutions. Furthermore, experiments showed that HCL enhances existing UMA methods when integrated with them, indicating its complementary utility.

Theoretical Implications and Future Directions

The introduction of HCL into UMA paradigms suggests a novel path for domain adaptation methodologies, one that emphasizes retaining historical representations over relying on source data. This direction aligns well with privacy-preserving mandates and the efficiency constraints inherent in real-world applications.

Moving forward, theoretical examination of the optimization stability and convergence properties of HCL could yield deeper insights and refinements. Additionally, extending the approach to other transfer-learning settings, particularly partial-set or open-set configurations, could leverage HCL's strength in preserving semantic integrity across domain shifts.

In conclusion, the paper's establishment of historical contrastive learning as a viable UMA strategy marks a notable advance in domain adaptation, laying a foundation for future exploration of adaptive learning settings where data-access limitations, compliance, and transmission costs are critical considerations.