Generalized Source-free Domain Adaptation (2108.01614v2)

Published 3 Aug 2021 in cs.CV

Abstract: Domain adaptation (DA) aims to transfer knowledge learned from a source domain to an unlabeled target domain. Some recent works tackle source-free domain adaptation (SFDA), where only a source pre-trained model is available for adaptation to the target domain. However, those methods do not consider preserving source performance, which is of high practical value in real-world applications. In this paper, we propose a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA), where the learned model needs to perform well on both the target and source domains, with access only to current unlabeled target data during adaptation. First, we propose local structure clustering (LSC), which clusters target features with their semantically similar neighbors and successfully adapts the model to the target domain in the absence of source data. Second, we propose sparse domain attention (SDA), which produces binary domain-specific attention to activate different feature channels for different domains; the domain attention is also used to regularize the gradient during adaptation in order to retain source information. In the experiments, our method is on par with or better than existing DA and SFDA methods in target performance; in particular, it achieves state-of-the-art performance (85.4%) on VisDA, and it works well for all domains after adapting to single or multiple target domains. Code is available at https://github.com/Albert0147/G-SFDA.

Authors (5)
  1. Shiqi Yang (47 papers)
  2. Yaxing Wang (46 papers)
  3. Joost van de Weijer (133 papers)
  4. Luis Herranz (46 papers)
  5. Shangling Jui (36 papers)
Citations (241)

Summary

Insight and Implications of "Generalized Source-free Domain Adaptation"

The recent work on "Generalized Source-free Domain Adaptation" (G-SFDA) by Yang et al. introduces a new domain adaptation paradigm in which a model must adapt to a target domain without access to source data while maintaining its performance on the source domain. This has marked implications for real-world applications where access to source data is restricted by privacy or computational constraints.

Overview and Contributions

G-SFDA builds on the existing framework of source-free domain adaptation (SFDA), in which adaptation to the new domain proceeds without direct access to source data. The paper extends this setting by requiring that models preserve their capabilities on both source and target domains after adaptation. To this end, Yang et al. introduce two mechanisms: Local Structure Clustering (LSC) and Sparse Domain Attention (SDA).

  1. Local Structure Clustering (LSC): LSC leverages neighborhood information in the feature space so that semantically similar target data points receive consistent predictions. It exploits the intrinsic clustering tendency of class-related features, allowing the model to adapt without explicit source data (see the first sketch after this list).
  2. Sparse Domain Attention (SDA): SDA mitigates catastrophic forgetting in the SFDA setting. By employing domain-specific binary activations over feature channels, it treats features differently for the source and target domains. This selective attention serves a dual purpose: activating the channels relevant to target adaptation while retaining the channels essential to the source domain, thus maintaining source performance (see the second sketch after this list).
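
To make LSC concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a memory bank of recent target features and softmax outputs (`bank_feats`, `bank_probs`; hypothetical names) and maximizes prediction agreement between each sample and its k nearest neighbors in feature space.

```python
import torch
import torch.nn.functional as F

def lsc_loss(features, logits, bank_feats, bank_probs, k=5):
    """Local Structure Clustering (sketch): push each target sample's
    prediction toward those of its k nearest neighbors.

    features:   (B, D) current batch embeddings
    logits:     (B, C) current batch classifier outputs
    bank_feats: (N, D) memory bank of past target embeddings (assumed)
    bank_probs: (N, C) corresponding softmax outputs (assumed)
    """
    probs = F.softmax(logits, dim=1)                               # (B, C)
    sims = F.normalize(features, dim=1) @ F.normalize(bank_feats, dim=1).t()
    _, nn_idx = sims.topk(k, dim=1)                                # (B, k) neighbor ids
    nn_probs = bank_probs[nn_idx]                                  # (B, k, C)
    # Maximize the dot product between a sample's prediction and each
    # neighbor's prediction, i.e., encourage locally consistent outputs.
    agreement = (probs.unsqueeze(1) * nn_probs).sum(dim=-1)        # (B, k)
    return -agreement.mean()
```

In practice the memory bank would be refreshed with each batch, and the paper additionally regularizes toward diverse predictions so that all samples do not collapse onto a single class.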
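Similarly, here is a hedged sketch of the SDA idea: a fixed binary mask per domain gates the embedding channels, and after backpropagation the gradients of channels that the source mask activates are suppressed so source behavior is preserved. The random mask construction and the `protect_source_channels` helper are illustrative only; in the paper the sparse masks come from a trained attention module, not random sampling.

```python
import torch
import torch.nn as nn

class SparseDomainAttention(nn.Module):
    """Sparse Domain Attention (sketch): binary channel masks gate the
    embedding so source and target use partly disjoint feature channels."""

    def __init__(self, dim, keep=0.5, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Illustrative random masks; the paper derives them with an
        # attention module rather than sampling them at random.
        self.register_buffer("mask_src", (torch.rand(dim, generator=g) < keep).float())
        self.register_buffer("mask_tgt", (torch.rand(dim, generator=g) < keep).float())

    def forward(self, feats, domain="target"):
        mask = self.mask_tgt if domain == "target" else self.mask_src
        return feats * mask  # zero out channels not assigned to this domain

def protect_source_channels(weight_grad, mask_src):
    """Gradient regularization (sketch): scale classifier-weight gradients
    by (1 - mask_src) so channels the source relies on stay unchanged
    during target adaptation."""
    return weight_grad * (1.0 - mask_src).unsqueeze(0)
```

A training step would apply the target mask in the forward pass, call `loss.backward()`, and then overwrite the classifier's `weight.grad` with `protect_source_channels(...)` before the optimizer step.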

Experimental Findings

Experimental evaluations show that G-SFDA achieves a better balance of source and target performance than existing SFDA methods. On benchmarks such as VisDA and Office-Home, it reaches competitive target accuracy (85.4% on VisDA) while minimizing degradation in source-domain accuracy, a key requirement for a practical domain adaptation solution.

Implications and Future Directions

This work matters for deploying machine learning models under stringent data-privacy constraints and in adaptive real-world applications such as autonomous driving and healthcare diagnostics. G-SFDA's ability to maintain robust performance across disparate data environments while respecting such operational constraints marks a notable step forward for domain adaptation.

Further research could improve the scalability of G-SFDA to multiple simultaneous domain shifts, refine the efficiency of LSC and SDA in high-dimensional feature spaces, and extend the approach to semi-supervised settings, broadening the range of environments it can handle.

In conclusion, the paper dispels the assumption that domain adaptation must rely on continued access to source data, setting a new standard for how models can transfer and consolidate knowledge across evolving data distributions.