Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation (1705.00609v1)

Published 1 May 2017 in cs.CV

Abstract: In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore changes in class prior distributions, i.e., class weight bias across domains. This problem is ubiquitous in domain adaptation, as it can be caused by changes in sample selection criteria and application scenarios, yet it remains open. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD to exploit the class prior probabilities of the source and target domains, the challenge being that class labels are unavailable in the target domain. To address this, the proposed weighted MMD model introduces an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested that alternates between assigning pseudo-labels, estimating auxiliary weights, and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.

Citations (544)

Summary

  • The paper introduces weighted MMD (WMMD), a discrepancy metric that corrects for class weight bias in unsupervised domain adaptation.
  • It employs an iterative classification EM procedure that assigns pseudo-labels to target samples and estimates class-specific weights to align source and target distributions.
  • Experiments on multiple datasets confirm WMMD's superior performance over conventional MMD for adapting deep models.

Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation

The paper "Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation," authored by Hongliang Yan et al., addresses a prevalent yet often overlooked issue in domain adaptation: the class weight bias. This bias arises when there are discrepancies in class prior distributions between source and target domains, an obstacle for existing methods that utilize Maximum Mean Discrepancy (MMD) as a measure of domain discrepancy.

Summary

The authors begin by critiquing traditional MMD-based methods for overlooking class weight bias, which can degrade performance in unsupervised domain adaptation (UDA) tasks. This bias typically arises when changes in sample selection criteria or application scenarios alter the class proportions across domains. Conventional MMD is ill-suited to this setting: it matches the marginal feature distributions of the two domains, so a mismatch in class priors is indistinguishable from genuine domain shift, and minimizing MMD can then misalign the features and hurt adaptation.
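
To make the contrast concrete, here is the empirical squared MMD alongside its class-weighted variant, written in generic RKHS notation (the symbols are ours for exposition and need not match the paper's exact formulation):

```latex
% Empirical squared MMD between source samples x_i^s and target samples x_j^t,
% with kernel feature map \phi into an RKHS \mathcal{H}:
\mathrm{MMD}^2(X^s, X^t) =
  \left\| \frac{1}{n_s}\sum_{i=1}^{n_s}\phi(x_i^s)
        - \frac{1}{n_t}\sum_{j=1}^{n_t}\phi(x_j^t) \right\|_{\mathcal{H}}^2

% Weighted MMD: each source sample is reweighted by an auxiliary weight
% w_{y_i^s} attached to its class, so that the reweighted source class
% priors can match the (estimated) target priors:
\mathrm{MMD}_w^2(X^s, X^t) =
  \left\| \sum_{i=1}^{n_s} \frac{w_{y_i^s}}{\sum_{i'=1}^{n_s} w_{y_{i'}^s}}\,\phi(x_i^s)
        - \frac{1}{n_t}\sum_{j=1}^{n_t}\phi(x_j^t) \right\|_{\mathcal{H}}^2
```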

To mitigate this issue, the authors propose a weighted variant of MMD, termed Weighted Maximum Mean Discrepancy (WMMD). The model introduces class-specific auxiliary weights that account for the class prior probabilities, allowing the source class distribution to be reweighted toward that of the target domain. The key challenge is that class labels are unavailable in the target domain; it is addressed with a classification EM (CEM) procedure that alternates between assigning pseudo-labels to target samples, estimating the auxiliary class weights from those pseudo-labels, and updating the model parameters.
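
A minimal NumPy sketch of the two quantities involved in one such iteration is given below. It assumes a Gaussian kernel and weights of the form w_c = p_t(c)/p_s(c) estimated from pseudo-labels; the function names and kernel choice are illustrative, not taken from the paper or its released code.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq_dists = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq_dists / (2.0 * sigma**2))

def estimate_class_weights(source_prior, pseudo_labels, num_classes):
    """Auxiliary weights w_c = p_t(c) / p_s(c), with the target prior p_t(c)
    estimated from the current pseudo-labels of the target samples."""
    target_prior = np.bincount(pseudo_labels, minlength=num_classes) / len(pseudo_labels)
    return target_prior / np.maximum(source_prior, 1e-12)

def weighted_mmd2(Xs, ys, Xt, class_weights, sigma=1.0):
    """Squared weighted MMD: each source sample carries its class's auxiliary
    weight, so the reweighted source priors match the estimated target priors."""
    alpha = class_weights[ys].astype(float)   # one weight per source sample
    alpha /= alpha.sum()                      # normalized source weights
    beta = np.full(len(Xt), 1.0 / len(Xt))    # uniform target weights
    return (alpha @ gaussian_kernel(Xs, Xs, sigma) @ alpha
            - 2.0 * alpha @ gaussian_kernel(Xs, Xt, sigma) @ beta
            + beta @ gaussian_kernel(Xt, Xt, sigma) @ beta)

# Toy usage: 2 classes, 50/50 source prior, pseudo-labels suggesting 80/20 target.
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(100, 16)), np.repeat([0, 1], 50)
Xt = rng.normal(size=(60, 16))
pseudo = rng.choice(2, size=60, p=[0.8, 0.2])  # would come from the classifier
w = estimate_class_weights(np.array([0.5, 0.5]), pseudo, num_classes=2)
print(w, weighted_mmd2(Xs, ys, Xt, w))
```

In a full CEM loop, the current classifier would produce the pseudo-labels, estimate_class_weights would refresh the weights, and the network would then be optimized on a classification loss plus this weighted MMD term.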

Experimental Results

Empirical evaluations demonstrate that WMMD outperforms traditional MMD-based methods in domain adaptation tasks. The experiments show consistent gains across several benchmarks, including Office-10+Caltech-10, ImageCLEF, and digit recognition datasets, and across different neural network architectures such as AlexNet, GoogLeNet, and LeNet.

Notably, the proposed WMMD model significantly reduces the adverse effects of class weight bias. It achieves this by aligning source and target domain class distributions more accurately, thus enabling better generalization of models pre-trained on source domains to target domains.

Implications and Future Work

The introduction of WMMD sets a precedent for addressing class weight bias in domain adaptation. The practical implications of this work include more robust cross-domain AI applications, particularly where labeled data is scarce or unavailable for target domains.

Theoretically, WMMD enhances our understanding of domain discrepancy metrics by highlighting the importance of class distribution alignment. This insight could spur further research into more sophisticated discrepancy measures and their integration with deep learning architectures.

Future work could extend this approach to explore:

  • Application to generative adversarial networks (GANs) for improved image generation tasks under domain adaptation settings.
  • The efficacy of WMMD on non-CNN-based architectures and different data modalities.
  • Incorporation with other divergence measures beyond MMD to further refine domain adaptation techniques.

In conclusion, this paper presents a focused improvement to MMD-based domain adaptation by directly tackling class weight bias, paving the way for more accurate and efficient UDA models.