Continual Domain Adversarial Adaptation via Double-Head Discriminators (2402.03588v1)
Abstract: Domain adversarial adaptation in a continual setting poses a significant challenge due to limited access to previous source domain data. Despite extensive research in continual learning, adversarial adaptation cannot be accomplished effectively using only the small number of stored source domain samples that is standard in memory-replay approaches. This limitation arises from the erroneous empirical estimation of the $\mathcal{H}$-divergence with few source domain samples. To tackle this problem, we propose a double-head discriminator algorithm that introduces an additional source-only domain discriminator trained solely during the source learning phase. We prove that introducing this pre-trained source-only domain discriminator reduces the empirical estimation error of the $\mathcal{H}$-divergence-related adversarial loss from the source domain side. Experiments on existing domain adaptation benchmarks show that our algorithm achieves more than a 2% improvement on all categories of target domain adaptation tasks while significantly mitigating forgetting on the source domain.
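For intuition, the sketch below shows one way the double-head idea can be wired into a DANN-style adversarial loss: a source-only discriminator head is fitted during the source phase (with full source data) and then frozen, so that during continual adaptation it supplies the source-side statistics instead of the few replayed samples. The module names (`feat`, `src_disc`, `adapt_disc`), the unweighted sum of the two loss terms, and the input dimensions are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of a double-head discriminator loss, assuming a
# DANN-style setup; names and loss weighting are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(256, 128), nn.ReLU())  # shared feature extractor
src_disc = nn.Linear(128, 1)    # head 1: trained during the source phase only
adapt_disc = nn.Linear(128, 1)  # head 2: trained during continual adaptation

# Continual adaptation phase: freeze the pre-trained source-only head so it
# retains the source-side statistics learned from the full source dataset.
for p in src_disc.parameters():
    p.requires_grad_(False)

def domain_adversarial_loss(x_src_replay, x_tgt):
    """The source side is scored by the frozen source-only head, so the few
    replayed source samples no longer dominate the empirical estimate of the
    H-divergence; the target side uses the trainable adaptation head."""
    f_src, f_tgt = feat(x_src_replay), feat(x_tgt)
    loss_src = F.binary_cross_entropy_with_logits(
        src_disc(f_src), torch.ones(len(f_src), 1))
    loss_tgt = F.binary_cross_entropy_with_logits(
        adapt_disc(f_tgt), torch.zeros(len(f_tgt), 1))
    # In a full DANN pipeline, the feature extractor would maximize this
    # loss through a gradient-reversal layer.
    return loss_src + loss_tgt

# Usage: a small replay buffer (8 stored source samples) and a target batch.
loss = domain_adversarial_loss(torch.randn(8, 256), torch.randn(64, 256))
loss.backward()
```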