
DACS: Domain Adaptation via Cross-domain Mixed Sampling (2007.08702v2)

Published 17 Jul 2020 in cs.CV

Abstract: Semantic segmentation models based on convolutional neural networks have recently displayed remarkable performance for a multitude of applications. However, these models typically do not generalize well when applied on new domains, especially when going from synthetic to real data. In this paper we address the problem of unsupervised domain adaptation (UDA), which attempts to train on labelled data from one domain (source domain), and simultaneously learn from unlabelled data in the domain of interest (target domain). Existing methods have seen success by training on pseudo-labels for these unlabelled images. Multiple techniques have been proposed to mitigate low-quality pseudo-labels arising from the domain shift, with varying degrees of success. We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels. These mixed samples are then trained on, in addition to the labelled data itself. We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes, a common synthetic-to-real semantic segmentation benchmark for UDA.

Overview of DACS: Domain Adaptation via Cross-domain Mixed Sampling

The paper "DACS: Domain Adaptation via Cross-domain Mixed Sampling" presents a novel approach to unsupervised domain adaptation (UDA) for semantic segmentation with deep learning. UDA aims to leverage labeled data from a known source domain while simultaneously learning from unlabeled data in the target domain of interest, a setting that is particularly challenging when transitioning from synthetic to real data. The crux of the issue lies in the domain shift, which degrades performance when models trained on the source domain are applied directly to the target domain. This work introduces DACS to address these challenges.

Methodology and Key Contributions

The core innovation of this paper is the Domain Adaptation via Cross-domain Mixed Sampling (DACS) technique. Traditional semantic segmentation models struggle in UDA due to poor generalization across different domains. Pseudo-labeling is a well-known strategy in UDA, focusing on training with predictions (pseudo-labels) from unlabelled target data. However, this can lead to suboptimal solutions because pseudo-labels may not be reliable due to the domain shift.
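The pseudo-labeling step described above can be sketched as follows. This is an illustrative NumPy version, not the paper's code; the function name and array shapes are assumptions. The per-pixel argmax of the segmentation network's class logits on an unlabelled target image serves as that image's pseudo-label:

```python
import numpy as np

def generate_pseudo_labels(logits):
    """Convert per-pixel class logits of shape (C, H, W) into hard
    pseudo-labels of shape (H, W) by taking the argmax over classes.

    In the paper's setting, the logits would come from the current
    segmentation network evaluated on an unlabelled target-domain image.
    """
    return np.argmax(logits, axis=0)

# Toy example: 3 classes on a 2x2 image.
logits = np.array([
    [[0.1, 2.0], [0.3, 0.2]],   # logits for class 0
    [[1.5, 0.1], [0.2, 0.9]],   # logits for class 1
    [[0.2, 0.5], [2.1, 0.1]],   # logits for class 2
])
pseudo = generate_pseudo_labels(logits)
# pseudo holds, per pixel, the class index with the highest logit
```

Because these pseudo-labels inherit any errors the network makes under domain shift, training on them naively can reinforce those errors, which is the failure mode DACS targets.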

DACS sidesteps the pitfalls of poor pseudo-label quality through cross-domain mixed sampling. Mixed images are generated by combining parts of images from the source and target domains, along with the corresponding labels and pseudo-labels. The technique is inspired by mixing-based augmentation methods from semi-supervised learning (SSL), which create robust learning signals by dynamically altering training samples. Specifically, DACS adapts the ClassMix strategy to the cross-domain setting: the pixels belonging to a subset of the classes present in a source image are pasted onto a target image, producing a new training sample whose label map combines source ground truth with target pseudo-labels.
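The cross-domain mixing can be sketched as below. This is a minimal illustrative NumPy version in which the pasted class subset is passed in explicitly; the names are hypothetical, and the paper's actual procedure samples half of the classes present in the source label at random each iteration:

```python
import numpy as np

def classmix_cross_domain(src_img, src_lbl, tgt_img, tgt_lbl, classes):
    """Paste the source-image pixels of the chosen classes onto the target
    image, and the corresponding source labels onto the target pseudo-labels.

    src_img, tgt_img: (H, W, C) images; src_lbl: (H, W) ground-truth labels;
    tgt_lbl: (H, W) pseudo-labels for the target image.
    """
    mask = np.isin(src_lbl, classes)                  # (H, W) binary paste mask
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_lbl)
    return mixed_img, mixed_lbl

# Toy 2x2 example: paste class 1 from the source onto the target.
src_img = np.ones((2, 2, 3))
tgt_img = np.zeros((2, 2, 3))
src_lbl = np.array([[1, 0], [2, 1]])
tgt_lbl = np.array([[0, 0], [0, 0]])                  # target pseudo-labels
mixed_img, mixed_lbl = classmix_cross_domain(src_img, src_lbl,
                                             tgt_img, tgt_lbl, [1])
# mixed_lbl is source ground truth where class 1 was, pseudo-labels elsewhere
```

The key property is that every mixed sample contains regions whose labels are reliable source ground truth, even though the rest of the label map is only a pseudo-label.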

Key contributions of the paper can be outlined as:

  1. Algorithm Introduction: DACS introduces a unique method for creating augmented samples by cross-domain mixing, allowing model training on highly perturbed images while ensuring that boundary information from both domains is preserved.
  2. Solution to Class Conflation: The method addresses class conflation (an issue with naive mixing where the model fails to distinguish between certain classes) by pasting regions with reliable ground-truth labels from the source domain into target images, so that the mixed samples always contain correctly labelled examples of the pasted classes along with their boundary information.
  3. State-of-the-art Results: DACS achieves superior results on the GTA5 to Cityscapes UDA benchmark, with substantial improvements in the mean Intersection over Union (mIoU) metric over previous methods.
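Putting the contributions above together, training minimizes a supervised cross-entropy loss on source images plus the same loss on the mixed images with their mixed label maps. A hedged sketch in NumPy, with illustrative function names that are not taken from the paper's code:

```python
import numpy as np

def pixel_cross_entropy(logits, labels):
    """Mean per-pixel cross-entropy; logits (C, H, W), integer labels (H, W)."""
    logits = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    h, w = labels.shape
    # Pick each pixel's log-probability for its ground-truth class.
    return -log_probs[labels, np.arange(h)[:, None], np.arange(w)].mean()

def dacs_loss(src_logits, src_labels, mix_logits, mix_labels):
    """Combined objective: supervised loss on labelled source images plus
    the loss on cross-domain mixed images and their mixed labels."""
    return (pixel_cross_entropy(src_logits, src_labels)
            + pixel_cross_entropy(mix_logits, mix_labels))
```

In the full method both terms are computed with the same segmentation network, and the mixed labels are refreshed every iteration as the pseudo-labels improve.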

Experimental Evaluation and Results

The paper reports strong numerical results, establishing a new state of the art on the GTA5 to Cityscapes synthetic-to-real adaptation benchmark. The implementation uses the DeepLab-v2 framework with a ResNet101 backbone. DACS not only significantly improves mIoU over the source-only baseline but also outperforms previous methods, notably on challenging classes that are traditionally difficult in domain-adapted scenarios.

Implications and Future Directions

The proposed DACS approach not only advances UDA for semantic segmentation but also opens intriguing avenues for future research. Cross-domain sample augmentation could be explored in other computer vision and machine learning fields where domain generalization is critical, such as medical imaging and remote sensing. Furthermore, enhancing the mixing strategies to handle extreme domain variations, or combining them with adversarial training, could yield additional performance benefits.

The approach also suggests further investigation into adaptive techniques that intelligently decide on mixing proportions or features based on dynamic analysis of current model performance. This adaptive augmentation could be particularly useful in scenarios with rapidly evolving data distributions, such as autonomous driving environments.

Overall, DACS marks a significant methodological contribution to domain adaptation research, providing a meaningful step towards adaptable AI systems capable of seamless domain transition without explicit annotations in target domains.

Authors (4)
  1. Wilhelm Tranheden (2 papers)
  2. Viktor Olsson (3 papers)
  3. Juliano Pinto (7 papers)
  4. Lennart Svensson (81 papers)
Citations (308)