
Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation (2003.03773v3)

Published 8 Mar 2020 in cs.CV

Abstract: This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation. Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data. Yet the pseudo labels of the target-domain data are usually predicted by the model trained on the source domain. Thus, the generated labels inevitably contain the incorrect prediction due to the discrepancy between the training domain and the test domain, which could be transferred to the final adapted model and largely compromises the training process. To overcome the problem, this paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning for unsupervised semantic segmentation adaptation. Given the input image, the model outputs the semantic segmentation prediction as well as the uncertainty of the prediction. Specifically, we model the uncertainty via the prediction variance and involve the uncertainty into the optimization objective. To verify the effectiveness of the proposed method, we evaluate the proposed method on two prevalent synthetic-to-real semantic segmentation benchmarks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, as well as one cross-city benchmark, i.e., Cityscapes -> Oxford RobotCar. We demonstrate through extensive experiments that the proposed approach (1) dynamically sets different confidence thresholds according to the prediction variance, (2) rectifies the learning from noisy pseudo labels, and (3) achieves significant improvements over the conventional pseudo label learning and yields competitive performance on all three benchmarks.

Authors (2)
  1. Zhedong Zheng (67 papers)
  2. Yi Yang (856 papers)
Citations (465)

Summary

Overview of "Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation"

This paper addresses a central challenge in unsupervised domain adaptation for semantic segmentation: building robust models that transfer knowledge from a labeled source domain to an unlabeled target domain. The core problem tackled is the inherent noise in pseudo labels generated by a model trained on a different domain distribution, which can significantly degrade the final adapted model's performance.

Methodology

The authors propose a novel method to improve pseudo label learning by estimating prediction uncertainty. The main idea is to use prediction variance to model uncertainty during training, which in turn refines pseudo label learning. This approach involves:

  1. Prediction Variance: The notion of uncertainty is derived from the variance in predictions between a primary and an auxiliary classifier. The auxiliary classifier, which shares a similar structure with the primary one, leverages intermediate layer activations to provide an alternative prediction pathway.
  2. Variance Regularization: The authors introduce a variance regularization that dynamically sets confidence thresholds based on prediction variance, which helps in mitigating the adverse effects of noisy pseudo labels. This allows the training to adaptively ignore uncertain pseudo labels and focus on learning from those with higher confidence.
  3. Optimization Objective: The prediction variance is folded into the cross-entropy loss so that uncertain pixels are down-weighted, while a regularization term prevents the model from trivially inflating the variance everywhere. The rectified objective requires no extra parameters or complex modules.
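The three steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes the per-pixel "variance" is measured as the KL divergence between the auxiliary and primary class-probability maps, and that the rectified loss takes the Kendall-and-Gal-style form exp(-var) * CE + var, down-weighting high-variance pixels while the additive var term keeps the variance from growing unboundedly. The function names (`rectified_pseudo_label_loss`, `kl_divergence`) are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """Per-pixel KL divergence between two class-probability maps of shape (C, H, W)."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=0)

def rectified_pseudo_label_loss(main_probs, aux_probs, pseudo_labels, eps=1e-8):
    """Variance-rectified cross-entropy on pseudo labels.

    main_probs, aux_probs: softmax outputs of the primary and auxiliary
        classifiers, shape (C, H, W).
    pseudo_labels: integer label map of shape (H, W).
    Pixels where the two classifiers disagree (high KL "variance") are
    down-weighted by exp(-var); adding var itself to the loss stops the
    model from escaping supervision by inflating the variance.
    """
    var = kl_divergence(aux_probs, main_probs)                # (H, W) uncertainty map
    picked = np.take_along_axis(main_probs, pseudo_labels[None, ...], axis=0)[0]
    ce = -np.log(picked + eps)                                # per-pixel cross-entropy
    return np.mean(np.exp(-var) * ce + var)
```

When the two classifiers agree exactly, the variance map is zero and the objective reduces to the plain cross-entropy on the pseudo labels; disagreement shrinks the effective per-pixel weight, which is what replaces a hand-tuned confidence threshold.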

Experimental Evaluation

The method was evaluated on three segmentation benchmarks: GTA5 -> Cityscapes, SYNTHIA -> Cityscapes, and Cityscapes -> Oxford RobotCar. Key findings include:

  • Improved Performance: The proposed approach achieved superior or competitive performance compared to several baseline and state-of-the-art methods across all benchmarks, showing marked improvements in mean Intersection over Union (mIoU).
  • Robustness to Label Noise: The method remained robust under variations in pseudo label quality, handling noise through the learned uncertainty rather than pre-defined confidence thresholds.
  • Visualization of Uncertainty: The variance estimation not only enhanced training but provided meaningful visual insights into prediction confidence levels, which could be valuable for further improvements in model interpretability.

Implications and Future Directions

The paper's method underscores the utility of uncertainty estimation in enhancing the robustness of domain-adaptive models. Its success suggests several theoretical and practical implications:

  • Refinement of Domain Adaptation Techniques: This method could influence future strategies in domain adaptation, particularly in how models manage and rectify label noise.
  • Application to Other Tasks: While focused on semantic segmentation, the underlying principles may benefit related domains such as medical imaging, where annotation noise is prevalent.
  • Broader Adoption of Variance-based Learning: The effectiveness of variance regularization may encourage further research into its application in other machine learning paradigms.

The authors suggest further exploration into utilizing uncertainty estimation for related tasks, indicating that this paper lays foundational work for more refined modeling in complex, real-world applications.