De-Confusing Pseudo-Labels in Source-Free Domain Adaptation (2401.01650v3)

Published 3 Jan 2024 in cs.CV

Abstract: Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data. It has attracted growing attention in recent years, with existing approaches focusing on self-training, which usually relies on pseudo-labeling techniques. In this paper, we introduce a novel noise-learning approach tailored to the noise distribution of domain adaptation settings, learning to de-confuse the pseudo-labels. More specifically, we learn a noise transition matrix of the pseudo-labels to capture the label corruption of each class and recover the underlying true label distribution. Estimating the noise transition matrix enables more accurate estimation of the true class posterior, resulting in better prediction accuracy. We demonstrate the effectiveness of our approach when combined with several source-free domain adaptation methods: SHOT, SHOT++, and AaD. We obtain state-of-the-art results on three domain adaptation datasets: VisDA, DomainNet, and OfficeHome.
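
To make the core idea concrete, below is a minimal PyTorch sketch of training a classifier through a learnable, row-stochastic noise transition matrix fitted to pseudo-labels. The names here (NoisyTransitionHead, adaptation_loss, the identity-based initialization) are illustrative assumptions, not the paper's actual implementation; consult the paper for the exact estimation procedure.

```python
# Minimal sketch, assuming a generic noise-transition formulation:
# T[i, j] approximates P(pseudo-label = j | true label = i), and the
# observed (noisy) posterior is p(y~ | x) = T^T p(y | x).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyTransitionHead(nn.Module):
    """Learnable row-stochastic transition matrix over class labels."""
    def __init__(self, num_classes: int):
        super().__init__()
        # Initialize near the identity so training starts from
        # "pseudo-labels are clean" and learns corruption from data.
        self.logits_T = nn.Parameter(torch.eye(num_classes) * 4.0)

    def forward(self, clean_logits: torch.Tensor) -> torch.Tensor:
        clean_post = F.softmax(clean_logits, dim=1)   # p(y | x), [B, C]
        T = F.softmax(self.logits_T, dim=1)           # rows sum to 1, [C, C]
        noisy_post = clean_post @ T                   # p(y~ | x), [B, C]
        return noisy_post

def adaptation_loss(backbone: nn.Module,
                    head: NoisyTransitionHead,
                    x: torch.Tensor,
                    pseudo_labels: torch.Tensor) -> torch.Tensor:
    """Fit the *noisy* posterior to the pseudo-labels, so the backbone's
    own softmax is pushed toward the underlying true label distribution."""
    noisy_post = head(backbone(x))
    return F.nll_loss(torch.log(noisy_post + 1e-8), pseudo_labels)
```

At inference time the transition head is dropped: the backbone's softmax, which the noisy-label loss has steered toward the true class posterior, is used directly for prediction.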

