
Dynamic Transfer for Multi-Source Domain Adaptation (2103.10583v1)

Published 19 Mar 2021 in cs.CV

Abstract: Recent works on multi-source domain adaptation focus on learning a domain-agnostic model whose parameters are static. However, such a static model struggles to handle conflicts across multiple domains and suffers performance degradation in both the source and target domains. In this paper, we present dynamic transfer to address domain conflicts, where the model parameters are adapted to samples. The key insight is that adapting the model across domains is achieved by adapting it across samples. This breaks down source domain barriers and turns multiple source domains into a single source domain. It also simplifies the alignment between source and target domains, since the target domain only needs to be aligned with some part of the union of source domains. Furthermore, we find that dynamic transfer can be modeled simply by aggregating residual matrices with a static convolution matrix. Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3% on the large multi-source domain adaptation dataset DomainNet. Source code is at https://github.com/liyunsheng13/DRT.

Citations (61)

Summary

Dynamic Transfer for Multi-Source Domain Adaptation

The paper introduces a novel approach to multi-source domain adaptation, specifically targeting the reduction of domain conflicts through a method termed dynamic transfer. Traditional methods in multi-source domain adaptation have relied upon creating static domain-agnostic models to bridge discrepancies between source and target domains. However, such static models often struggle with maintaining performance across varied source domains due to inherent domain conflicts. This paper proposes a dynamic transfer mechanism that allows model parameters to be sample-adaptive, thereby mitigating these conflicts and offering enhanced performance on both source and target datasets.

In the dynamic transfer paradigm, adaptation across domains is achieved by adapting the model across samples. This reframes the multi-source problem as a single-source one and simplifies source-target alignment, since the target domain need only align with some part of the merged source domains. Significantly, the model operates effectively without domain labels and still exceeds existing state-of-the-art methods by more than 3% on DomainNet, a comprehensive multi-source domain adaptation dataset.

Methodology

The proposed method introduces dynamic residual transfer (DRT), which involves modulating model parameters dynamically based on input samples. This approach distinguishes itself from conventional static transfer methods that use fixed parameters. The dynamic aspect is realized by integrating residual matrices with a static convolution matrix, enabling the model to generate different configurations per input, thus effectively reducing domain barriers among source domains.
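This aggregation can be sketched as a static weight plus an input-conditioned residual; the notation below is an illustrative reconstruction consistent with the description above, not the paper's exact equation:

```latex
W(x) = W_s + \Delta W(x), \qquad \Delta W(x) = \sum_{k=1}^{K} \alpha_k(x)\, W_k
```

where $W_s$ is the shared static convolution matrix, the $W_k$ are $K$ residual matrices, and the coefficients $\alpha_k(x)$ are computed from the input sample $x$, so each sample sees a different effective weight $W(x)$.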

The dynamic behavior is realized through a lightweight branch that computes sample-specific coefficients for subspace routing. These coefficients weight a set of residual basis matrices, and their linear combination is added to the static convolution matrix, adjusting the convolutional kernel per input and yielding an adaptable, sample-specific parameterization.
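The routing mechanism can be sketched as follows. This is a minimal NumPy illustration of the general idea (a static weight plus a softmax-weighted combination of residual basis matrices, with coefficients computed from a pooled input), not the paper's actual implementation; the pooling, the linear coefficient head `P`, and the matrix shapes are all simplifying assumptions made here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight shapes, flattened to plain matrices for simplicity:
# C_out x C_in instead of full conv kernels. K residual basis matrices.
C_out, C_in, K = 4, 8, 3
W_static = rng.standard_normal((C_out, C_in))       # shared static weight
W_basis = rng.standard_normal((K, C_out, C_in))     # residual subspace basis

# Hypothetical coefficient branch: global-average-pool the input, then a
# small linear head followed by a softmax gives K routing coefficients.
P = rng.standard_normal((K, C_in))

def routing_coefficients(x):
    pooled = x.mean(axis=0)            # (C_in,) global average pool
    logits = P @ pooled                # (K,) one logit per subspace
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

def dynamic_weight(x):
    alpha = routing_coefficients(x)                   # (K,)
    residual = np.tensordot(alpha, W_basis, axes=1)   # (C_out, C_in)
    return W_static + residual        # sample-adaptive effective weight

# Two different samples induce two different effective weight matrices,
# while the static part and the basis are shared across all samples.
x1 = rng.standard_normal((16, C_in))
x2 = rng.standard_normal((16, C_in))
W1, W2 = dynamic_weight(x1), dynamic_weight(x2)
```

Because only the small coefficient branch depends on the input, the per-sample cost beyond a static layer is one pooled linear map and a weighted sum of `K` matrices.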

Experimental Results

The methodology is empirically validated on two datasets: Digit-Five and DomainNet. In both cases, dynamic residual transfer outperforms a range of baselines, including well-established domain adaptation techniques such as DANN, ADDA, and MCD. On Digit-Five, DRT exceeds the previous state of the art by roughly 1% on average, underscoring its ability to handle significant domain variance.

The DomainNet experiments reinforce these findings: DRT surpasses the state-of-the-art CMSS and shows a notable improvement over traditional methods that rely on explicit domain information. DRT's self-training variants yield further gains, delivering robust performance across multiple target domains and highlighting the generalizability of the approach.

Implications and Future Work

Dynamic transfer represents a paradigm shift in how multi-source domain adaptation is conceptualized, prioritizing dynamic over static solutions. This work not only addresses immediate performance improvements but also suggests that the integration of dynamic adaptations can streamline domain alignment processes. Future research may focus on further optimizing dynamic coefficient generation methods and exploring adaptive architectures that might leverage variable computational budgets, thus expanding the practical applicability of dynamic transfer mechanisms in diverse deployment scenarios.

In conclusion, this paper offers a substantial contribution to the field by refining the approach to multi-source domain adaptation through dynamic residual transfers. The results underscore the benefit of dynamic modeling in navigating complex domain shifts, advocating for additional innovation in network adaptability and efficiency within AI research.
