DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection (2410.09004v1)
Abstract: Domain adaptive object detection (DAOD) aims to generalize detectors trained on an annotated source domain to an unlabelled target domain. Since vision-language models (VLMs) provide essential general knowledge on unseen images, freezing the visual encoder and inserting a domain-agnostic adapter can learn domain-invariant knowledge for DAOD. However, the domain-agnostic adapter is inevitably biased toward the source domain: it discards knowledge that is discriminative on the unlabelled target domain, i.e., domain-specific knowledge of the target domain. To address this, we propose a novel Domain-Aware Adapter (DA-Ada) tailored for the DAOD task. The key idea is to exploit the domain-specific knowledge lying between the essential general knowledge and the domain-invariant knowledge. DA-Ada consists of a Domain-Invariant Adapter (DIA) for learning domain-invariant knowledge and a Domain-Specific Adapter (DSA) for injecting domain-specific knowledge recovered from the information discarded by the visual encoder. Comprehensive experiments over multiple DAOD tasks show that DA-Ada can efficiently infer a domain-aware visual encoder and boost domain adaptive object detection. Our code is available at https://github.com/Therock90421/DA-Ada.
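To make the adapter arrangement described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it wraps one frozen encoder stage with a bottleneck-style Domain-Invariant Adapter (DIA) on the stage output and a Domain-Specific Adapter (DSA) fed by a proxy for the information the frozen stage discards. The class names (`BottleneckAdapter`, `DomainAwareStage`), the input-minus-output residual used as the "discarded" signal, the reduction factor, and the simple additive fusion are all assumptions made for illustration.

```python
# Minimal sketch (assumptions noted above), not the released DA-Ada code.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project."""

    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DomainAwareStage(nn.Module):
    """Wraps one frozen encoder stage with DIA and DSA branches (illustrative)."""

    def __init__(self, frozen_stage: nn.Module, dim: int):
        super().__init__()
        self.stage = frozen_stage
        for p in self.stage.parameters():   # keep the VLM visual encoder frozen
            p.requires_grad_(False)
        self.dia = BottleneckAdapter(dim)   # learns domain-invariant knowledge
        self.dsa = BottleneckAdapter(dim)   # recovers domain-specific knowledge

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        enc = self.stage(x)                 # general knowledge from the frozen stage
        discarded = x - enc                 # crude proxy for information the stage drops
        return enc + self.dia(enc) + self.dsa(discarded)


if __name__ == "__main__":
    dim = 256
    # stand-in for one stage of a frozen VLM visual encoder
    frozen_stage = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    block = DomainAwareStage(frozen_stage, dim)
    feats = torch.randn(2, 196, dim)        # (batch, tokens, channels)
    print(block(feats).shape)               # torch.Size([2, 196, 256])
```

Only the two adapters are trainable here, so the frozen encoder keeps its general knowledge while the DSA branch has an explicit path to whatever the stage did not pass through; how DA-Ada actually extracts and aligns that discarded information is detailed in the paper and repository.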
- Haochen Li
- Rui Zhang
- Hantao Yao
- Xin Zhang
- Yifan Hao
- Xinkai Song
- Xiaqing Li
- Yongwei Zhao
- Ling Li
- Yunji Chen