BlenDA: Domain Adaptive Object Detection through diffusion-based blending (2401.09921v1)
Abstract: Unsupervised domain adaptation (UDA) aims to transfer a model learned using labeled data from the source domain to unlabeled data in the target domain. To address the large domain gap between the source and target domains, we propose a novel regularization method for domain adaptive object detection, BlenDA, which generates pseudo samples of intermediate domains together with their corresponding soft domain labels for adaptation training. The intermediate samples are generated by dynamically blending the source images with their corresponding translated images using an off-the-shelf pre-trained text-to-image diffusion model, which takes the text label of the target domain as input and has demonstrated superior image-to-image translation quality. Based on experimental results from two adaptation benchmarks, our proposed approach can significantly enhance the performance of the state-of-the-art domain adaptive object detector, Adversarial Query Transformer (AQT). In particular, on the Cityscapes to Foggy Cityscapes adaptation, we achieve an impressive 53.4% mAP on the Foggy Cityscapes dataset, surpassing the previous state of the art by 1.5%. It is worth noting that our proposed method is also applicable to various paradigms of domain adaptive object detection. The code is available at: https://github.com/aiiu-lab/BlenDA
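The core blending step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the target-styled image has already been produced by the diffusion model (e.g., an image-to-image translation of the source image), assumes a simple linear pixel-space blend, and uses hypothetical names (`blend_domains`, `blend_ratio`). The paper's dynamic blending schedule is not reproduced here.

```python
import numpy as np

def blend_domains(source_img: np.ndarray,
                  translated_img: np.ndarray,
                  blend_ratio: float):
    """Blend a source image with its diffusion-translated counterpart.

    blend_ratio in [0, 1]: 0 keeps the pure source image,
    1 yields the fully translated (target-styled) image.
    Returns the intermediate-domain image and a soft domain label
    [P(source), P(target)] matching the blend ratio.
    """
    assert source_img.shape == translated_img.shape
    assert 0.0 <= blend_ratio <= 1.0
    blended = (1.0 - blend_ratio) * source_img + blend_ratio * translated_img
    soft_label = np.array([1.0 - blend_ratio, blend_ratio])
    return blended, soft_label

# Example: an intermediate sample halfway between the two domains.
src = np.zeros((4, 4, 3), dtype=np.float32)   # stand-in source image
tgt = np.ones((4, 4, 3), dtype=np.float32)    # stand-in translated image
img, label = blend_domains(src, tgt, 0.5)
```

During adaptation training, such intermediate samples and their soft domain labels would be fed to the domain classifier in place of hard source/target labels, acting as a regularizer across the domain gap.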
- “Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results,” NeurIPS, 2017.
- “Unbiased mean teacher for cross-domain object detection,” CVPR, 2021.
- “Unpaired image-to-image translation using cycle-consistent adversarial networks,” ICCV, 2017.
- “Cross-domain adaptive teacher for object detection,” CVPR, 2022.
- “Cross domain object detection by target-perceived dual branch distillation,” CVPR, 2022.
- “Contrastive mean teacher for domain adaptive object detectors,” CVPR, 2023.
- “Harmonious teacher for cross-domain object detection,” CVPR, 2023.
- “InstructPix2Pix: Learning to follow image editing instructions,” arXiv preprint arXiv:2211.09800, 2022.
- “ConfMix: Unsupervised domain adaptation for object detection via confidence-based mixing,” WACV, 2023.
- “mixup: Beyond empirical risk minimization,” ICLR, 2018.
- “LossMix: Simplify and generalize mixup for object detection and beyond,” arXiv preprint arXiv:2303.10343, 2023.
- “AQT: Adversarial query transformers for domain adaptive object detection,” IJCAI-ECAI, 2022.
- “The cityscapes dataset for semantic urban scene understanding,” CVPR, 2016.
- “Semantic foggy scene understanding with synthetic data,” IJCV, 2018.
- “Cross-domain detection via graph-induced prototype alignment,” CVPR, 2020.
- “End-to-end object detection with transformers,” ECCV, 2020.
- “Deformable DETR: Deformable transformers for end-to-end object detection,” ICLR, 2021.
- “Exploring sequence feature alignment for domain adaptive detection transformers,” ACM MM, 2021.
- “Unsupervised domain adaptation by backpropagation,” ICML, 2015.
- “Learning domain adaptive object detection with probabilistic teacher,” ICML, 2022.
- “SIGMA: Semantic-complete graph matching for domain adaptive object detection,” CVPR, 2022.
- “SCAN: Cross domain object detection with semantic conditioned adaptation,” AAAI, 2022.
- “Unsupervised domain adaptation for one-stage object detector using offsets to bounding box,” ECCV, 2022.
- “BDD100K: A diverse driving dataset for heterogeneous multitask learning,” CVPR, 2020.
- “Decoupled weight decay regularization,” ICLR, 2019.
- “Faster R-CNN: Towards real-time object detection with region proposal networks,” NeurIPS, 2015.
- “FCOS: Fully convolutional one-stage object detection,” ICCV, 2019.
- “Very deep convolutional networks for large-scale image recognition,” ICLR, 2015.
- “Deep residual learning for image recognition,” CVPR, 2016.