Gradual Domain Adaptation via Normalizing Flows (2206.11492v4)

Published 23 Jun 2022 in stat.ML and cs.LG

Abstract: Standard domain adaptation methods do not work well when there is a large gap between the source and target domains. Gradual domain adaptation is one approach to this problem: it leverages intermediate domains that shift gradually from the source domain to the target domain. Previous work assumes that the number of intermediate domains is large and the distance between adjacent domains is small, so that gradual domain adaptation via self-training on unlabeled datasets is applicable. In practice, however, gradual self-training fails when the number of intermediate domains is limited and the distance between adjacent domains is large. We propose using normalizing flows to address this problem while staying within the framework of unsupervised domain adaptation. The proposed method learns a transformation from the distribution of the target domain, via the source domain, to a Gaussian mixture distribution. We evaluate the proposed method in experiments on real-world datasets and confirm that it mitigates the above problem and improves classification performance.
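As a rough illustration of the idea in the abstract, the sketch below (not the authors' code) fits a small RealNVP-style normalizing flow by maximum likelihood so that samples are mapped to an equally weighted Gaussian mixture base distribution, with one component per class. All names, dimensions, and hyperparameters here (AffineCoupling, Flow, gmm_log_prob, dim=4, etc.) are illustrative assumptions rather than details taken from the paper.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: rescales and shifts half of the
    features conditioned on the other half, so the map stays invertible."""
    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(dim - half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        if self.flip:
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                          # keep scales bounded
        y2 = x2 * torch.exp(s) + t
        y = torch.cat([y2, x1] if self.flip else [x1, y2], dim=-1)
        return y, s.sum(dim=-1)                    # output, log|det Jacobian|

class Flow(nn.Module):
    """Stack of coupling layers mapping data x to a latent code z = f(x)."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers)]
        )

    def forward(self, x):
        log_det = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        return x, log_det

def gmm_log_prob(z, means, log_std):
    """Log-density of an equally weighted isotropic Gaussian mixture."""
    diff = z.unsqueeze(1) - means.unsqueeze(0)                     # (B, K, D)
    log_comp = (-0.5 * ((diff / log_std.exp()) ** 2).sum(-1)
                - z.shape[-1] * (log_std + 0.5 * math.log(2 * math.pi)))
    return torch.logsumexp(log_comp, dim=1) - math.log(means.shape[0])

# Toy usage: fit the flow so that samples land on the mixture in latent space.
dim, n_classes = 4, 3
flow = Flow(dim)
means = 3.0 * torch.randn(n_classes, dim)       # one mixture component per class
log_std = torch.tensor(0.0)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

x = torch.randn(256, dim) + 2.0                 # placeholder for domain samples
for _ in range(200):
    z, log_det = flow(x)
    loss = -(gmm_log_prob(z, means, log_std) + log_det).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the gradual setting described in the abstract, such a flow would presumably be trained on source data (where labels determine mixture assignments) and then adapted through the intermediate domains toward the target; the snippet shows only the basic maximum-likelihood step with a Gaussian mixture base.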

