Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation (2002.08546v6)

Published 20 Feb 2020 in cs.CV and cs.LG

Abstract: Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require to access the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named \emph{Source HypOthesis Transfer} (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results among multiple domain adaptation benchmarks.

Authors (3)
  1. Jian Liang (162 papers)
  2. Dapeng Hu (12 papers)
  3. Jiashi Feng (295 papers)
Citations (1,097)

Summary

Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation

The paper "Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation," authored by Jian Liang, Dapeng Hu, and Jiashi Feng, addresses a crucial challenge in Unsupervised Domain Adaptation (UDA). Traditional UDA methods typically require access to source data to train models for target domains. However, this paradigm is inefficient and may breach data privacy, especially in scenarios involving decentralized private data.

Key Contributions

The authors propose a novel and practical UDA setting where only a pre-trained source model is available. Their contribution, Source Hypothesis Transfer (SHOT), is a representation learning framework that leverages the source model without needing access to source data. SHOT uses the source model’s classifier (hypothesis) to guide the learning of a target-specific feature extraction module. This alignment is achieved through two main techniques: information maximization and self-supervised pseudo-labeling.
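To make the setup concrete, the following is a minimal PyTorch sketch of this split, assuming the source network exposes `features` and `classifier` submodules; the names and structure are illustrative, not the authors' released code.

```python
import copy

import torch.nn as nn

def build_target_model(source_model: nn.Module) -> nn.Module:
    """Clone the source network and freeze its classifier head (the
    "hypothesis"); only the feature-extraction layers remain trainable.
    Assumes illustrative `features`/`classifier` submodule names."""
    target_model = copy.deepcopy(source_model)
    for p in target_model.classifier.parameters():
        p.requires_grad = False  # the source hypothesis stays fixed
    return target_model

# During adaptation, only the trainable (feature-extractor) parameters
# are handed to the optimizer, e.g.:
# optimizer = torch.optim.SGD(
#     (p for p in target_model.parameters() if p.requires_grad), lr=1e-3)
```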

Methodology

  1. Source Hypothesis Transfer (SHOT): SHOT retains the classifier module (hypothesis) from the source model and optimizes only the feature-encoding module for the target domain, aligning the target domain's representations with the source hypothesis. Information Maximization (IM) makes the target outputs individually certain yet globally diverse, avoiding both uncertain predictions and the trivial solution in which all samples collapse onto a single class (see the loss sketch after this list).
  2. Self-Supervised Pseudo-Labeling: To further enhance feature alignment, the authors propose a self-supervised pseudo-labeling mechanism. It generates label estimates for unlabeled target data by computing class-wise prototypes and refining the pseudo labels iteratively, exploiting the global structure of the target domain to correct noisy assignments (also sketched below).
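As a concrete illustration of the IM objective, here is a short PyTorch sketch: each target prediction is pushed toward confidence (low per-sample entropy) while the batch-mean prediction is pushed toward diversity (high entropy). This is a sketch of the idea; the smoothing constant and equal weighting of the two terms are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Information-maximization objective over a batch of target logits:
    low entropy per sample (confident predictions) plus high entropy of
    the batch-mean prediction (diverse class usage). Illustrative sketch."""
    probs = F.softmax(logits, dim=1)                       # (B, C)
    # Per-sample entropy: push each output toward a confident,
    # one-hot-like prediction.
    ent = -(probs * torch.log(probs + 1e-5)).sum(dim=1).mean()
    # Diversity term: the entropy of the batch-mean prediction should be
    # high, preventing collapse of the whole batch onto a single class.
    mean_probs = probs.mean(dim=0)
    div = (mean_probs * torch.log(mean_probs + 1e-5)).sum()
    return ent + div  # minimizing `div` maximizes mean-prediction entropy
```

The pseudo-labeling step can be sketched in the same spirit: class prototypes are built as probability-weighted centroids of target features, each sample is assigned to its nearest prototype by cosine similarity, and the prototypes are refined with the resulting hard labels. The two-round refinement and tensor shapes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prototype_pseudo_labels(feats: torch.Tensor, logits: torch.Tensor,
                            n_iters: int = 2) -> torch.Tensor:
    """Self-supervised pseudo-labeling sketch: prototypes from
    probability-weighted centroids, nearest-prototype assignment,
    then refinement with hard labels.
    feats: (N, D) target features; logits: (N, C) classifier outputs."""
    probs = F.softmax(logits, dim=1)                                # (N, C)
    # Initial prototypes: soft, probability-weighted class centroids.
    protos = probs.t() @ feats / probs.sum(dim=0, keepdim=True).t() # (C, D)
    for _ in range(n_iters):
        # Assign each sample to its closest prototype (cosine similarity).
        sim = F.normalize(feats, dim=1) @ F.normalize(protos, dim=1).t()
        labels = sim.argmax(dim=1)                                  # (N,)
        # Rebuild the prototypes from the hard pseudo-labels.
        onehot = F.one_hot(labels, num_classes=logits.size(1)).float()
        protos = onehot.t() @ feats / (onehot.sum(dim=0, keepdim=True).t()
                                       + 1e-8)
    return labels
```

In the paper's full objective, these pseudo-labels supervise a cross-entropy term that is combined with the IM loss during target adaptation.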

Experimental Evaluation

The authors validate SHOT across multiple UDA tasks, including digit recognition and object recognition. SHOT consistently outperforms existing methods and achieves state-of-the-art results on several benchmarks. For instance:

  • On the medium-sized Office-Home dataset, SHOT improved the average accuracy from 67.6% to 71.8%.
  • On the large-scale VisDA-C dataset, SHOT achieved the highest per-class accuracy, demonstrating its effectiveness in aligning target features with the source hypothesis.

Implications and Future Directions

The implications of this research are both practical and theoretical. Practically, SHOT provides a robust framework for environments with strict data privacy requirements, enabling UDA without source data exposure. Theoretically, this work opens avenues for exploring the potential of model transfer techniques in other forms of transfer learning, such as few-shot and zero-shot learning.

Future research could explore the following:

  • Federated Learning Integration: Extending SHOT’s principles to federated learning scenarios where multiple decentralized models need to collaboratively train without sharing data directly.
  • Robustness and Scalability: Improving SHOT's robustness on very large-scale datasets and in heterogeneous environments where data distributions differ substantially.
  • Adaptive Pseudo-Labeling Mechanisms: Refining pseudo-labeling strategies to dynamically adjust to varying degrees of domain shift, further improving the accuracy and reliability of the adapted models.

Conclusion

The paper presents a compelling case for performing UDA with only a pre-trained source model, sidestepping the need for source data access. Through SHOT, the authors demonstrate that it is feasible to leverage information maximization and self-supervised pseudo-labeling to achieve significant performance gains. This represents a meaningful advance in the UDA field, addressing privacy concerns while maintaining competitive adaptation efficacy.