Collaborative Multi-source Domain Adaptation Through Optimal Transport (2404.06599v3)

Published 9 Apr 2024 in cs.LG and cs.AI

Abstract: Multi-source Domain Adaptation (MDA) seeks to adapt models trained on data from multiple labeled source domains so that they perform effectively on unlabeled target-domain data, typically under the assumption that the source data are accessible. To address the challenges of model adaptation and data privacy, we introduce Collaborative MDA Through Optimal Transport (CMDA-OT), a novel framework consisting of two key phases. In the first phase, each source domain is independently adapted to the target domain using optimal transport methods. In the second phase, a centralized collaborative learning architecture aggregates the N models from the N sources without accessing their data, thereby safeguarding privacy. During this process, the server leverages a small set of pseudo-labeled samples from the target domain, known as the target validation subset, to refine and guide the adaptation. This dual-phase approach not only improves model performance on the target domain but also addresses vital privacy challenges inherent in domain adaptation.
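The two-phase design described in the abstract can be illustrated with a minimal, self-contained sketch. This is an assumption-laden illustration rather than the authors' implementation: the function names (`sinkhorn_plan`, `adapt_source_to_target`, `aggregate`) are hypothetical, the cost is assumed to be squared-Euclidean, and the source models are assumed to expose scikit-learn-style `predict` / `predict_proba` methods.

```python
import numpy as np

def sinkhorn_plan(a, b, C, reg=0.1, n_iters=200):
    # Entropic-regularized OT (Sinkhorn iterations): returns a transport plan G
    # whose row marginals match a and column marginals match b for cost matrix C.
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def adapt_source_to_target(Xs, Xt, reg=0.1):
    # Phase 1 (run locally by each source): transport the labeled source samples
    # toward the target domain via barycentric projection under the OT plan.
    ns, nt = len(Xs), len(Xt)
    a = np.full(ns, 1.0 / ns)
    b = np.full(nt, 1.0 / nt)
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
    C = C / C.max()  # scale costs so the Sinkhorn kernel stays well conditioned
    G = sinkhorn_plan(a, b, C, reg)
    return (G @ Xt) / G.sum(axis=1, keepdims=True)  # transported source samples

def aggregate(models, X_val, y_val_pseudo):
    # Phase 2 (server): weight the N source-trained models by their accuracy on a
    # small pseudo-labeled target validation subset; raw source data never moves.
    accs = np.array([(m.predict(X_val) == y_val_pseudo).mean() for m in models])
    weights = accs / accs.sum()

    def predict(X):
        # Weighted average of the models' class-probability outputs.
        probs = sum(w * m.predict_proba(X) for w, m in zip(weights, models))
        return probs.argmax(axis=1)

    return predict
```

In this sketch, each source owner would run `adapt_source_to_target` locally, train a classifier on the transported samples, and send only the fitted model to the server; the server's `aggregate` step then combines the N models using the pseudo-labeled target validation subset, mirroring the privacy constraint described above.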

Authors (2)
  1. Omar Ghannou (1 paper)
  2. Younès Bennani (17 papers)