Domain Adaptive Ensemble Learning (2003.07325v3)

Published 16 Mar 2020 in cs.CV

Abstract: The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem; otherwise, it is a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert for its own domain and a non-expert for the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other to be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target-domain, with its own expert providing the supervisory signal to the ensemble of non-experts learned from the other sources. For unlabeled target data under the UDA setting, where a real expert does not exist, DAEL uses pseudo-labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins. The code is released at https://github.com/KaiyangZhou/Dassl.pytorch.

Citations (239)

Summary

  • The paper introduces a unified ensemble framework that leverages domain-specific experts and pseudo-target strategies to enhance generalization in UDA and DG settings.
  • It employs a shared CNN backbone with specialized classifiers, using weak and strong augmentations for effective pseudo-labeling and consistency regularization.
  • Experimental results demonstrate significant accuracy improvements on datasets like Digit-5, DomainNet, PACS, and Office-Home, nearly matching oracle performance.

Domain Adaptive Ensemble Learning: A Unified Framework for Improved Generalization

The paper by Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang presents a novel approach for addressing domain shift in machine learning, known as Domain Adaptive Ensemble Learning (DAEL). This framework aims to enhance the generalization of deep neural networks when transferring knowledge from multiple source domains to a target domain. The paper investigates two specific cases under this context: multi-source unsupervised domain adaptation (UDA), where the target data is unlabeled, and domain generalization (DG), where no target data is available during the training phase.

Core Contributions

  1. Unified Framework: DAEL provides a generalizable strategy applicable to both UDA and DG scenarios. It combines ensemble learning principles with domain adaptation to utilize diverse information from multiple sources.
  2. Collaborative Learning of Experts: The framework uses a Convolutional Neural Network (CNN) backbone shared across domains, accompanied by several specialized classifier heads, each designated as the expert for one source domain. This setup enables collaborative learning among domain-specific experts, letting the network leverage complementary insights while preserving useful domain-specific attributes (a minimal architectural sketch follows this list).
  3. Pseudo-Target-Domain Strategy: For both DG and UDA, each source domain takes its turn as a pseudo-target-domain: that domain's expert provides the supervisory signal, and the ensemble of non-experts from the remaining sources is trained to match it. Under UDA, where no real expert exists for the unlabeled target data, DAEL generates pseudo-labels to supervise the ensemble.
  4. Extensive Validation: The method demonstrates significant improvements over state-of-the-art benchmarks across three multi-source UDA datasets (Digit-5, DomainNet, miniDomainNet) and two DG datasets (PACS, Office-Home).
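To make the architecture concrete, here is a minimal PyTorch sketch of a shared backbone with per-domain expert heads. It assumes a ResNet-18 backbone and linear heads; the class and method names are illustrative and do not come from the authors' Dassl.pytorch code.

```python
from typing import Optional

import torch
import torch.nn as nn
from torchvision.models import resnet18

class DAELModel(nn.Module):
    """Shared CNN feature extractor with one classifier head per source domain.

    Illustrative sketch of the DAEL architecture described in the paper;
    the authors' actual implementation lives in Dassl.pytorch.
    """

    def __init__(self, num_domains: int, num_classes: int):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # keep only the feature extractor
        self.backbone = backbone
        # One expert head per source domain.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_domains)]
        )

    def forward(self, x: torch.Tensor, domain: Optional[int] = None) -> torch.Tensor:
        feats = self.backbone(x)
        if domain is not None:
            # Logits from a single domain expert (used during training).
            return self.heads[domain](feats)
        # Ensemble inference: average the experts' softmax outputs.
        probs = torch.stack([h(feats).softmax(dim=-1) for h in self.heads])
        return probs.mean(dim=0)
```

During training, `forward(x, domain=i)` queries a single expert; at test time the averaged ensemble probabilities are used.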

Methodology

DAEL employs consistency regularization, originally proposed for semi-supervised learning, to align expert predictions with ensemble outputs. A two-view augmentation strategy, pairing weak and strong augmentations, strengthens this regularization: weak augmentation yields stable pseudo-labels from the domain experts, while strong augmentation adds robustness by simulating the kinds of variation expected in unseen target domains.
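The sketch below illustrates one collaborative training step in this spirit: each expert is supervised on its own domain; each domain in turn serves as a pseudo-target whose expert (on the weak view) supervises the non-expert ensemble (on the strong view); and, under UDA, confident ensemble predictions on weakly augmented target data pseudo-label the strongly augmented view. The MSE consistency loss, the confidence threshold `tau`, the equal weighting of the three terms, and the helper names are assumptions of this sketch, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def dael_step(model, batches, weak_aug, strong_aug, tau=0.95, target_batch=None):
    """One illustrative DAEL update (assumptions noted in the lead-in).

    `model` follows the DAELModel sketch above; `batches` is a list of
    (images, labels) tensors, one pair per source domain.
    """
    num_domains = len(batches)
    loss = torch.zeros((), device=batches[0][0].device)

    for i, (x, y) in enumerate(batches):
        xs = strong_aug(x)  # one strongly augmented view, reused below

        # (1) Each expert is supervised on its own domain.
        loss = loss + F.cross_entropy(model(xs, domain=i), y)

        # (2) Domain i acts as a pseudo-target: its expert's soft prediction
        # on the weak view supervises the non-expert ensemble on the strong view.
        with torch.no_grad():
            soft_target = model(weak_aug(x), domain=i).softmax(dim=-1)
        non_experts = torch.stack(
            [model(xs, domain=j).softmax(dim=-1)
             for j in range(num_domains) if j != i]
        ).mean(dim=0)
        loss = loss + F.mse_loss(non_experts, soft_target)

    if target_batch is not None:
        # (3) UDA: the full ensemble pseudo-labels weakly augmented target
        # images; confident predictions supervise the strong view.
        with torch.no_grad():
            probs = model(weak_aug(target_batch))  # averaged ensemble probs
            conf, pseudo = probs.max(dim=-1)
            mask = conf >= tau
        if mask.any():
            log_probs = model(strong_aug(target_batch)).clamp_min(1e-8).log()
            loss = loss + F.nll_loss(log_probs[mask], pseudo[mask])

    return loss
```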

Experimental Results

DAEL achieves compelling results:

  • On the Digit-5 dataset, DAEL nearly matches oracle-level performance, with especially large accuracy gains on harder target domains such as MNIST-M and SVHN.
  • On the more complex, large-scale DomainNet dataset, DAEL exhibits superior accuracy compared to recent approaches, demonstrating the strength of leveraging domain-specific classifiers.
  • Evaluations on PACS and Office-Home for DG confirm DAEL's capacity to generalize to unseen domains, with considerable gains over contemporary methods such as JiGen and CrossGrad.

Future Directions

The paper points toward several directions for future work:

  • Adaptive Pseudo-Labeling: Further refinement of pseudo-labeling mechanisms could enhance adaptation efficiency, particularly in highly variable domains.
  • Augmentation Techniques: Exploration of task-specific augmentation strategies may provide broader applicability across different data types and objectives.
  • Scalability and Efficiency: As the framework involves multiple expert models, optimization efforts could focus on reducing computational overhead without compromising performance.

The DAEL framework signifies a substantial step forward in overcoming domain shift challenges, offering both theoretical and practical insights into effective domain adaptation via ensemble learning. Its implementation reveals the potential of domain specialization and collaborative learning to enhance model robustness across variable target settings.