BAM: Box Abstraction Monitors for Real-time OoD Detection in Object Detection (2403.18373v1)
Abstract: Out-of-distribution (OoD) detection techniques for deep neural networks (DNNs) have become crucial because they filter out abnormal inputs, especially when DNNs are deployed in safety-critical applications and interact with an open and dynamic environment. Nevertheless, integrating OoD detection into state-of-the-art (SOTA) object detection DNNs poses significant challenges, partly due to the complexity introduced by SOTA OoD construction methods, which require modifying the DNN architecture and introducing complex loss functions. This paper proposes a simple yet surprisingly effective method, called Box Abstraction-based Monitors (BAM), that requires neither retraining nor architectural changes to the object detection DNN. The novelty of BAM stems from using a finite union of convex box abstractions to capture the learned features of objects in in-distribution (ID) data, together with the key observation that features from OoD data are more likely to fall outside these boxes. The union of convex regions in the feature space forms non-convex yet interpretable decision boundaries, overcoming the limitations of VOS-like detectors without sacrificing real-time performance. Experiments integrating BAM into Faster R-CNN-based object detection DNNs demonstrate considerably improved performance over SOTA OoD detection techniques.
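The mechanism described in the abstract can be illustrated with a minimal sketch, assuming the following setup: ID features of one class are clustered (e.g., with k-means, cf. Lloyd below), an axis-aligned box is built around each cluster, and a detection is flagged as OoD when its feature vector falls outside every box of its predicted class. The function names, cluster count, and enlargement margin below are assumptions for exposition, not the paper's implementation.

```python
# Illustrative sketch of a box-abstraction monitor for one class.
# Names and parameters (build_boxes, is_in_distribution, n_clusters, margin)
# are assumptions, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans

def build_boxes(features, n_clusters=8, margin=0.05):
    """Cluster ID feature vectors of one class and return per-cluster
    axis-aligned boxes as (lower, upper) bound pairs."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    boxes = []
    for c in range(n_clusters):
        pts = features[labels == c]
        if len(pts) == 0:
            continue
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        pad = margin * (hi - lo)  # optional enlargement of each box
        boxes.append((lo - pad, hi + pad))
    return boxes

def is_in_distribution(feature, boxes):
    """A feature counts as ID if it lies inside any box (union of boxes)."""
    return any(np.all(feature >= lo) and np.all(feature <= hi)
               for lo, hi in boxes)

# Example: build boxes from (stand-in) ID features and query the monitor.
rng = np.random.default_rng(0)
id_features = rng.normal(0.0, 1.0, size=(500, 16))
boxes = build_boxes(id_features)
print(is_in_distribution(id_features[0], boxes))     # inside the boxes -> True
print(is_in_distribution(np.full(16, 10.0), boxes))  # far from ID data -> False
```

In the setting the abstract describes, such boxes would be built per class from features extracted by the frozen object detector (e.g., Faster R-CNN), and the membership test reduces to a handful of coordinate comparisons per box, which is consistent with the claimed real-time performance.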
- S. Abrecht, A. Hirsch, S. Raafatnia, and M. Woehrle, “Deep learning safety concerns in automated driving perception,” arXiv preprint arXiv:2309.03774, 2023.
- K. Li, K. Chen, H. Wang, L. Hong, C. Ye, J. Han, Y. Chen, W. Zhang, C. Xu, D.-Y. Yeung, et al., “CODA: A real-world road corner case dataset for object detection in autonomous driving,” in ECCV, pp. 406–423, Springer, 2022.
- X. Du, Z. Wang, M. Cai, and Y. Li, “VOS: Learning what you don’t know by virtual outlier synthesis,” in ICLR, 2022.
- D. Hendrycks and K. Gimpel, “A baseline for detecting misclassified and out-of-distribution examples in neural networks,” in ICLR, 2017.
- S. Liang, Y. Li, and R. Srikant, “Enhancing the reliability of out-of-distribution image detection in neural networks,” in ICLR, 2018.
- W. Liu, X. Wang, J. Owens, and Y. Li, “Energy-based out-of-distribution detection,” NeurIPS, vol. 33, pp. 21464–21475, 2020.
- M. R. Nallapareddy, K. Sirohi, P. L. Drews-Jr, W. Burgard, C.-H. Cheng, and A. Valada, “EvCenterNet: Uncertainty estimation for object detection using evidential learning,” in IROS, pp. 5699–5706, IEEE, 2023.
- T. A. Henzinger, A. Lukina, and C. Schilling, “Outside the box: Abstraction-based monitoring of neural networks,” in ECAI, pp. 2433–2440, IOS Press, 2020.
- C.-H. Cheng, C.-H. Huang, T. Brunner, and V. Hashemi, “Towards safety verification of direct perception neural networks,” in DATE, pp. 1640–1643, IEEE, 2020.
- C.-H. Cheng, C. Wu, E. Seferis, and S. Bensalem, “Prioritizing corners in OoD detectors via symbolic string manipulation,” in ATVA, pp. 397–413, Springer, 2022.
- C. Wu, Y. Falcone, and S. Bensalem, “Customizable reference runtime monitoring of neural networks using resolution boxes,” in RV, pp. 23–41, Springer, 2023.
- S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” NeurIPS, vol. 28, 2015.
- A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in CVPR, pp. 3354–3361, IEEE, 2012.
- F. Yu, H. Chen, X. Wang, W. Xian, Y. Chen, F. Liu, V. Madhavan, and T. Darrell, “BDD100K: A diverse driving dataset for heterogeneous multitask learning,” in CVPR, pp. 2636–2645, IEEE, 2020.
- M. Salehi, H. Mirzaei, D. Hendrycks, Y. Li, M. Rohban, M. Sabokrou, et al., “A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges,” Transactions on Machine Learning Research, no. 234, 2022.
- D. Miller, L. Nicholson, F. Dayoub, and N. Sünderhauf, “Dropout sampling for robust object detection in open-set conditions,” in ICRA, pp. 3243–3249, IEEE, 2018.
- A. Harakeh, M. Smart, and S. L. Waslander, “BayesOD: A Bayesian approach for uncertainty estimation in deep object detectors,” in ICRA, pp. 87–93, IEEE, 2020.
- F. Kraus and K. Dietmayer, “Uncertainty estimation in one-stage object detection,” in ITSC, pp. 53–60, IEEE, 2019.
- X. Du, G. Gozum, Y. Ming, and Y. Li, “SIREN: Shaping representations for detecting out-of-distribution objects,” NeurIPS, vol. 35, pp. 20434–20449, 2022.
- S. Gasperini, J. Haug, M.-A. N. Mahani, A. Marcos-Ramiro, N. Navab, B. Busam, and F. Tombari, “CertainNet: Sampling-free uncertainty estimation for object detection,” RA-L, vol. 7, no. 2, pp. 698–705, 2022.
- K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang, and Q. Tian, “CenterNet: Keypoint triplets for object detection,” in ICCV, pp. 6569–6578, IEEE, 2019.
- M. Sensoy, L. Kaplan, and M. Kandemir, “Evidential deep learning to quantify classification uncertainty,” NeurIPS, vol. 31, pp. 3183–3193, 2018.
- S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
- Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick, “Detectron2.” https://github.com/facebookresearch/detectron2, 2019.
- B. E. Moore and J. J. Corso, “FiftyOne.” https://github.com/voxel51/fiftyone, 2020.
- T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in ECCV, pp. 740–755, Springer, 2014.
- A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov, T. Duerig, and V. Ferrari, “The Open Images Dataset V4,” IJCV, vol. 128, no. 7, pp. 1956–1981, 2020.
- M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The Pascal visual object classes (VOC) challenge,” IJCV, vol. 88, no. 2, pp. 303–338, 2010.