
Target-Relevant Knowledge Preservation for Multi-Source Domain Adaptive Object Detection (2204.07964v1)

Published 17 Apr 2022 in cs.CV

Abstract: Domain adaptive object detection (DAOD) is a promising way to alleviate the performance drop of detectors in new scenes. Although great effort has been made on single-source domain adaptation, the more general task with multiple source domains remains underexplored, owing to knowledge degradation when the sources are combined. To address this issue, we propose a novel approach, namely target-relevant knowledge preservation (TRKP), for unsupervised multi-source DAOD. Specifically, TRKP adopts the teacher-student framework, where a multi-head teacher network is built to extract knowledge from the labeled source domains and guide the student network to learn detectors for the unlabeled target domain. The teacher network is further equipped with an adversarial multi-source disentanglement (AMSD) module to preserve source-domain-specific knowledge while performing cross-domain alignment. In addition, a holistic target-relevant mining (HTRM) scheme is developed to re-weight the source images according to their source-target relevance. In this way, the teacher network is forced to capture target-relevant knowledge, which reduces domain shift when it mentors object detection in the target domain. Extensive experiments on various widely used benchmarks report new state-of-the-art scores, highlighting the effectiveness of the approach.
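To make the pipeline the abstract describes more concrete, the following is a minimal PyTorch sketch of the three ingredients it names: a multi-head teacher over labeled source domains, distillation of teacher predictions to a student on unlabeled target images, and re-weighting of source images by source-target relevance. This is not the authors' implementation of TRKP: the module names (MultiHeadTeacher, relevance_weights), the toy backbone, the cosine-similarity weighting, and the simplification of detection heads to classification heads are all assumptions made for illustration.

```python
# Illustrative sketch only; names and the toy backbone are assumptions,
# and detection heads are simplified to classification heads for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadTeacher(nn.Module):
    """Shared backbone with one prediction head per labeled source domain."""

    def __init__(self, num_sources: int, feat_dim: int = 256, num_classes: int = 9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head per source, so domain-specific knowledge is kept separate.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_sources)
        )

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)
        return feats, [head(feats) for head in self.heads]


def relevance_weights(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Weight source images by cosine similarity to the mean target feature
    (a simple stand-in for the paper's holistic target-relevant mining)."""
    target_proto = F.normalize(target_feats.mean(dim=0, keepdim=True), dim=1)
    sims = F.normalize(source_feats, dim=1) @ target_proto.t()  # (N_src, 1)
    return torch.softmax(sims.squeeze(1), dim=0)                # sums to 1


# Toy usage: teacher predictions on target images become soft targets for the student.
teacher = MultiHeadTeacher(num_sources=2)
student = MultiHeadTeacher(num_sources=1)  # single head for the target domain

src_imgs = torch.randn(4, 3, 64, 64)
tgt_imgs = torch.randn(4, 3, 64, 64)

with torch.no_grad():
    src_feats, _ = teacher(src_imgs)
    tgt_feats, tgt_logits_per_head = teacher(tgt_imgs)
    # Average the per-source heads to form soft pseudo-labels for the target domain.
    soft_targets = torch.stack(tgt_logits_per_head).mean(dim=0).softmax(dim=1)

# Relevance weights would scale per-image source losses when training the teacher,
# e.g. (w * per_image_loss).sum() instead of per_image_loss.mean().
w = relevance_weights(src_feats, tgt_feats)

_, (student_logits,) = student(tgt_imgs)
distill_loss = F.kl_div(student_logits.log_softmax(dim=1), soft_targets,
                        reduction="batchmean")
print(distill_loss.item(), w)
```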

Authors (10)
  1. Jiaxi Wu (10 papers)
  2. Jiaxin Chen (55 papers)
  3. Mengzhe He (2 papers)
  4. Yiru Wang (30 papers)
  5. Bo Li (1107 papers)
  6. Bingqi Ma (12 papers)
  7. Weihao Gan (22 papers)
  8. Wei Wu (482 papers)
  9. Yali Wang (78 papers)
  10. Di Huang (203 papers)
Citations (23)
