LEAD: Learning Decomposition for Source-free Universal Domain Adaptation (2403.03421v1)

Published 6 Mar 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Universal Domain Adaptation (UniDA) targets knowledge transfer in the presence of both covariate and label shifts. Recently, Source-free Universal Domain Adaptation (SF-UniDA) has emerged to achieve UniDA without access to source data, which tends to be more practical due to data protection policies. The main challenge lies in determining whether covariate-shifted samples belong to target-private unknown categories. Existing methods tackle this either through hand-crafted thresholding or by developing time-consuming iterative clustering strategies. In this paper, we propose a new idea of LEArning Decomposition (LEAD), which decouples features into source-known and -unknown components to identify target-private data. Technically, LEAD first leverages orthogonal decomposition analysis for feature decomposition. It then builds instance-level decision boundaries to adaptively identify target-private data. Extensive experiments across various UniDA scenarios demonstrate the effectiveness and superiority of LEAD. Notably, in the OPDA scenario on the VisDA dataset, LEAD outperforms GLC by 3.5% overall H-score and reduces the time needed to derive pseudo-labeling decision boundaries by 75%. LEAD is also appealing in that it is complementary to most existing methods. The code is available at https://github.com/ispc-lab/LEAD.
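To make the decomposition idea concrete, the sketch below projects target features onto the subspace spanned by the source classifier's class prototypes and scores each instance by the energy left in the orthogonal residual, the "source-unknown" component. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation (see the linked repository for that): the names `decompose_features`, `private_scores`, `features`, and `prototypes` are hypothetical, and the fixed-quantile threshold stands in for LEAD's adaptive instance-level decision boundaries.

```python
import numpy as np

# Hypothetical sketch: split features into source-known and source-unknown parts.
# Assumes `features` are L2-normalized target features (N, D) from a frozen source
# model and `prototypes` are the source classifier's class weight vectors (K, D).

def decompose_features(features: np.ndarray, prototypes: np.ndarray):
    """Return each feature's projection onto the span of the source prototypes
    (source-known component) and the orthogonal residual (source-unknown)."""
    # Orthonormal basis for the source-known subspace via SVD (assumes K <= D).
    basis, _, _ = np.linalg.svd(prototypes.T, full_matrices=False)  # (D, K)
    known = features @ basis @ basis.T  # projection onto the subspace
    unknown = features - known          # orthogonal residual
    return known, unknown

def private_scores(features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Score each instance by the relative energy of its source-unknown
    component; larger scores suggest target-private (unknown) categories."""
    _, unknown = decompose_features(features, prototypes)
    return np.linalg.norm(unknown, axis=1) / np.linalg.norm(features, axis=1)

# Illustrative usage: flag instances whose unknown-component ratio is high.
# LEAD derives instance-level boundaries adaptively; a fixed quantile is used
# here purely for illustration.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 64))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
protos = rng.normal(size=(10, 64))
scores = private_scores(feats, protos)
is_private = scores > np.quantile(scores, 0.75)
```

A feature lying mostly inside the prototype subspace is well explained by the source classes, while a large orthogonal residual indicates variation the source model never learned, which is the cue for a target-private category.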

Authors (7)
  1. Sanqing Qu (20 papers)
  2. Tianpei Zou (6 papers)
  3. Lianghua He (23 papers)
  4. Florian Röhrbein (5 papers)
  5. Alois Knoll (190 papers)
  6. Guang Chen (86 papers)
  7. Changjun Jiang (47 papers)
Citations (6)
