Toward Improving Robustness of Object Detectors Against Domain Shift (2403.12049v1)
Abstract: This paper proposes a data augmentation method for improving the robustness of driving object detectors against domain shift. The domain shift problem arises when there is a significant discrepancy between the distribution of the source data domain used in the training phase and that of the target data domain encountered at deployment. Domain shift is one of the most common causes of a considerable drop in the performance of deep neural network models. One effective approach to addressing this problem is to increase the diversity of the training data. To this end, we propose a data synthesis module that can be utilized to train more robust and effective object detectors. Adopting YOLOv4 as the base object detector, we observe a remarkable improvement in performance on both the source- and target-domain data. The code of this work is publicly available at https://github.com/tranleanh/haze-synthesis.
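The abstract does not detail the synthesis module, but the repository name and the cited foggy-scene work of Sakaridis et al. suggest haze synthesis based on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)), where J is the clean image, A the atmospheric light, β the scattering coefficient, and d(x) a per-pixel depth map (obtainable, e.g., from a monocular depth estimator such as that of Godard et al.). The sketch below is a minimal illustration under that assumption; the function name, parameters, and default values are hypothetical, not the authors' actual API.

```python
import numpy as np

def synthesize_haze(image, depth, beta=1.0, airlight=0.8):
    # Hypothetical haze-synthesis sketch (assumed model, not the paper's exact code).
    # image:    H x W x 3 float array in [0, 1], the clean scene radiance J
    # depth:    H x W float array of scene depth (e.g., from monocular depth estimation)
    # beta:     scattering coefficient; larger values produce denser haze
    # airlight: global atmospheric light A in [0, 1]

    # Transmission map: t(x) = exp(-beta * d(x))
    transmission = np.exp(-beta * depth)[..., np.newaxis]

    # Atmospheric scattering model: I(x) = J(x) * t(x) + A * (1 - t(x))
    hazy = image * transmission + airlight * (1.0 - transmission)
    return np.clip(hazy, 0.0, 1.0)
```

In a training pipeline, each clean source-domain image could be paired with one or more hazy variants at randomly sampled β values, which is one way to realize the diversity-increasing augmentation the abstract describes.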
- A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “Yolov4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
- L.-A. Tran and M.-H. Le, “Robust u-net-based road lane markings detection for autonomous driving,” in 2019 International Conference on System Science and Engineering (ICSSE). IEEE, 2019, pp. 62–66.
- L.-A. Tran, S. Moon, and D.-C. Park, “A novel encoder-decoder network with guided transmission map for single image dehazing,” Procedia Computer Science, vol. 204, pp. 682–689, 2022.
- C. Sakaridis, D. Dai, and L. Van Gool, “Semantic foggy scene understanding with synthetic data,” International Journal of Computer Vision, vol. 126, pp. 973–992, 2018.
- Y. Cai, T. Luan, H. Gao, H. Wang, L. Chen, Y. Li, M. A. Sotelo, and Z. Li, “Yolov4-5d: An effective and efficient object detector for autonomous driving,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–13, 2021.
- C. Godard, O. Mac Aodha, M. Firman, and G. J. Brostow, “Digging into self-supervised monocular depth estimation,” in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 3828–3838.
- A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.
- J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263–7271.
- ——, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
- T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V. Springer, 2014, pp. 740–755.
- S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8759–8768.
- K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
- P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine et al., “Scalability in perception for autonomous driving: Waymo open dataset,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 2446–2454.