
Uncertainty-Aware Pseudo-Label Filtering for Source-Free Unsupervised Domain Adaptation (2403.11256v1)

Published 17 Mar 2024 in cs.CV

Abstract: Source-free unsupervised domain adaptation (SFUDA) aims to enable the use of a pre-trained source model in an unlabeled target domain without access to the source data. Self-training is one way to solve SFUDA, where confident target samples are iteratively selected as pseudo-labeled samples to guide target model learning. However, prior heuristic methods for filtering noisy pseudo labels all introduce extra models, which are sensitive to model assumptions and may introduce additional errors or mislabeling. In this work, we propose a method called Uncertainty-aware Pseudo-label-filtering Adaptation (UPA) to efficiently address this issue in a coarse-to-fine manner. Specifically, we first introduce a sample selection module named Adaptive Pseudo-label Selection (APS), which is responsible for filtering noisy pseudo labels. APS estimates sample uncertainty with a simple scheme that aggregates knowledge from neighboring samples, and confident samples are then selected as clean pseudo-labeled samples. Additionally, we incorporate Class-Aware Contrastive Learning (CACL) to mitigate the memorization of pseudo-label noise by learning robust pair-wise representations supervised by pseudo labels. Through extensive experiments conducted on three widely used benchmarks, we demonstrate that our proposed method achieves performance on par with state-of-the-art SFUDA methods. Code is available at https://github.com/chenxi52/UPA.
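To make the APS idea concrete, below is a minimal sketch (not the authors' released code) of neighbor-based pseudo-label filtering: a pseudo label is kept only when a sample's nearest neighbors in feature space largely agree with it. The function name and the `k` and `agreement_thresh` parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def filter_pseudo_labels(features, pseudo_labels, k=10, agreement_thresh=0.6):
    """Return a boolean mask of 'clean' pseudo-labeled samples.

    features:      (N, D) L2-normalized target-domain embeddings
    pseudo_labels: (N,) hard labels predicted for the target samples
    """
    # Cosine similarity between all pairs of normalized features.
    sim = features @ features.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-matches

    # Indices of the k most similar samples for each sample.
    knn = np.argsort(-sim, axis=1)[:, :k]

    # Fraction of neighbors sharing each sample's pseudo label; low
    # agreement is treated as high uncertainty and filtered out.
    agreement = (pseudo_labels[knn] == pseudo_labels[:, None]).mean(axis=1)
    return agreement >= agreement_thresh
```

Since the abstract describes self-training as iterative, a mask like this would be recomputed each round as pseudo labels are refreshed.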

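CACL is described as pair-wise contrastive learning supervised by pseudo labels. The following is a minimal PyTorch sketch in the style of supervised contrastive learning, assuming each sample's positives are the other batch samples sharing its filtered pseudo label; the paper's exact loss and temperature may differ.

```python
import torch
import torch.nn.functional as F

def class_aware_contrastive_loss(embeddings, pseudo_labels, temperature=0.07):
    """Contrastive loss over a batch, supervised by (filtered) pseudo labels.

    embeddings:    (B, D) projection-head outputs
    pseudo_labels: (B,) pseudo labels kept by the filtering step
    """
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.T / temperature

    # Exclude self-similarity with a large negative value (a literal
    # -inf would produce NaNs when multiplied by the mask below).
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(self_mask, -1e9)

    # Positives: other batch samples sharing the same pseudo label.
    pos_mask = (pseudo_labels[:, None] == pseudo_labels[None, :]) & ~self_mask

    # Log-probability of each pair under a softmax over the row.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    per_sample = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts.clamp(min=1)

    # Only samples with at least one positive contribute to the loss.
    has_pos = pos_counts > 0
    return per_sample[has_pos].mean() if has_pos.any() else logits.new_zeros(())
```

Pulling representations with the same pseudo label together (and pushing others apart) is how such a loss can discourage the network from memorizing individual noisy labels.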
Authors (5)
  1. Xi Chen (1036 papers)
  2. Haosen Yang (23 papers)
  3. Huicong Zhang (5 papers)
  4. Hongxun Yao (30 papers)
  5. Xiatian Zhu (139 papers)