De-Confusing Pseudo-Labels in Source-Free Domain Adaptation (2401.01650v3)
Abstract: Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data. It has attracted growing attention in recent years, with existing approaches focusing on self-training, which usually relies on pseudo-labeling techniques. In this paper, we introduce a novel noise-learning approach tailored to the noise distribution arising in domain adaptation settings, which learns to de-confuse the pseudo-labels. More specifically, we learn a noise transition matrix over the pseudo-labels to capture the label corruption of each class and recover the underlying true label distribution. Estimating the noise transition matrix enables a better estimate of the true class posterior, resulting in better prediction accuracy. We demonstrate the effectiveness of our approach when combined with several source-free domain adaptation methods: SHOT, SHOT++, and AaD. We obtain state-of-the-art results on three domain adaptation datasets: VisDA, DomainNet, and OfficeHome.
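To make the transition-matrix idea concrete, here is a minimal sketch of the standard forward-correction mechanism that such approaches build on: the model's clean class posterior is mapped through a matrix T, where T[i, j] approximates P(pseudo-label = j | true label = i), and the corrected probabilities are what get compared against the noisy pseudo-labels during training. This is an illustrative toy example (numpy, fixed T), not the paper's exact estimation procedure; all names are ours.

```python
import numpy as np

def forward_corrected_probs(clean_probs, T):
    """Map the model's clean class-posterior p(y|x) through a noise
    transition matrix T (rows: true class, cols: observed pseudo-label)
    to obtain the predicted distribution over *noisy* pseudo-labels."""
    return clean_probs @ T

def noisy_label_cross_entropy(clean_probs, T, noisy_labels):
    """Cross-entropy of the observed (noisy) pseudo-labels under the
    forward-corrected distribution. Minimizing this trains the clean
    posterior while T absorbs the systematic label corruption."""
    corrected = forward_corrected_probs(clean_probs, T)
    picked = corrected[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + 1e-12))

# Toy 3-class setting: pseudo-labels confuse class 0 with class 1
# 20% of the time; class 2 is labeled reliably.
T = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.9, 0.0],
              [0.0, 0.0, 1.0]])
clean = np.array([[0.9, 0.05, 0.05]])   # model is confident in class 0
noisy = forward_corrected_probs(clean, T)
```

In the toy example the corrected distribution shifts mass from class 0 toward class 1, matching how the pseudo-labeler actually errs, so the loss no longer penalizes the model for disagreeing with a corrupted label. In the setting studied here, T itself is learned rather than fixed.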
- Self-supervised learning for domain adaptation on point clouds. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 123–133, 2021.
- Unsupervised label noise modeling and loss correction. In International conference on machine learning, pages 312–321. PMLR, 2019.
- Self-supervised noisy label learning for source-free unsupervised domain adaptation. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10185–10192. IEEE, 2022.
- Fast batch nuclear-norm maximization and minimization for robust domain adaptation. arXiv preprint arXiv:2107.06154, 2021.
- Reconciling a centroid-hypothesis conflict in source-free domain adaptation, 2022.
- Source-free domain adaptation via distribution estimation. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
- Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180–1189. PMLR, 2015.
- Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016.
- Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification. In International Conference on Learning Representations, 2019.
- Robust loss functions under label noise for deep neural networks. In Proceedings of the AAAI conference on artificial intelligence, 2017.
- Training deep neural-networks using a noise adaptation layer. In International Conference on Learning Representations (ICLR), 2017.
- Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31, 2018.
- Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- Learning discrete representations via information maximizing self augmented training. In Proceedings of the 34th International Conference on Machine Learning, pages 1558–1567. PMLR, 2017.
- Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International conference on machine learning, pages 2304–2313. PMLR, 2018.
- C-sfda: A curriculum learning aided self-training framework for efficient source free domain adaptation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24120–24131, 2023.
- A broad study of pre-training for domain generalization and adaptation. In The European Conference on Computer Vision (ECCV), 2022.
- Conmix for source-free single and multi-target domain adaptation. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023.
- Concurrent subsidiary supervision for unsupervised source-free domain adaptation. In European Conference on Computer Vision, pages 177–194. Springer, 2022a.
- Balancing discriminability and transferability for source-free domain adaptation. In International Conference on Machine Learning, pages 11710–11728. PMLR, 2022b.
- Confidence score for source-free unsupervised domain adaptation, 2022.
- Rethinking distributional matching based domain adaptation. ArXiv, abs/2006.13352, 2020a.
- Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020b.
- Provably end-to-end label-noise learning without anchor points. In International Conference on Machine Learning (ICML), 2021.
- Disc: Learning from noisy labels via dynamic instance-specific selection and correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24070–24079, 2023.
- Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning (ICML), pages 6028–6039, 2020.
- Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021. In Press.
- A holistic view of label noise transition matrix in deep learning and beyond. In The Eleventh International Conference on Learning Representations, ICLR, 2023.
- Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
- Early-learning regularization prevents memorization of noisy labels. Advances in neural information processing systems, 33:20331–20342, 2020.
- Classification with noisy labels by importance reweighting. IEEE Transactions on pattern analysis and machine intelligence, 38(3):447–461, 2015.
- Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
- Minimal-entropy correlation alignment for unsupervised deep domain adaptation. In International Conference on Learning Representations, 2018.
- Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1944–1952, 2017.
- Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017.
- Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406–1415, 2019.
- Regularizing neural networks by penalizing confident output distributions. In International Conference on Learning Representations, Workshop Track Proceedings, 2017.
- Source-free domain adaptation via avatar prototype generation and adaptation. In International Joint Conference on Artificial Intelligence, 2021.
- Bmd: A general class-balanced multicentric dynamic prototype strategy for source-free domain adaptation. In European conference on computer vision, 2022.
- Uncertainty-guided source-free domain adaptation. In European Conference on Computer Vision, pages 537–555. Springer, 2022.
- Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3723–3732, 2018.
- Learning from noisy labels with deep neural networks: A survey. IEEE Transactions on Neural Networks and Learning Systems, 34(11):8135–8153, 2023.
- Training convolutional networks with noisy labels. In International conference on learning representations workshop, 2015.
- Deep coral: Correlation alignment for deep domain adaptation. In European conference on computer vision, pages 443–450. Springer, 2016.
- Unsupervised domain adaptation through self-supervision. arXiv preprint arXiv:1909.11825, 2019.
- Learning from noisy labels by regularized estimation of annotator confusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Learning from noisy labels with decoupled meta label purifier. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19934–19943, 2023.
- Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.
- Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167–7176, 2017.
- Deep hashing network for unsupervised domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5385–5394. IEEE Computer Society, 2017.
- Adaptive adversarial network for source-free domain adaptation. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 8990–8999, 2021.
- Robust early-learning: Hindering the memorization of noisy labels. In International conference on learning representations, 2020.
- Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2691–2699, 2015.
- Self-supervised domain adaptation for computer vision tasks. IEEE Access, 7:156694–156706, 2019.
- Exploiting the intrinsic neighborhood structure for source-free domain adaptation. Advances in neural information processing systems, 34:29393–29405, 2021a.
- Generalized source-free domain adaptation. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 8958–8967, Los Alamitos, CA, USA, 2021b. IEEE Computer Society.
- Attracting and dispersing: A simple approach for source-free domain adaptation, 2022.
- When source-free domain adaptation meets learning with noisy labels. In The Eleventh International Conference on Learning Representations, ICLR, 2023.
- How does disagreement help generalization against label corruption? In International Conference on Machine Learning, pages 7164–7173. PMLR, 2019.
- Rethinking the role of pre-trained networks in source-free domain adaptation. In Int. Conf. Comput. Vis. (ICCV), 2023.
- Learning noise transition matrix from only noisy labels via total variation regularization. In International Conference on Machine Learning (ICML), 2021.