mmPlace: Robust Place Recognition with Intermediate Frequency Signal of Low-cost Single-chip Millimeter Wave Radar (2403.04703v1)
Abstract: Place recognition is crucial for tasks such as loop-closure detection and re-localization. Single-chip millimeter wave radar (single-chip radar for short) has emerged as a low-cost sensor option for place recognition, with the advantage of being insensitive to degraded visual environments. However, it faces two challenges. First, the sparse point cloud produced by a single-chip radar yields poor performance with current place recognition methods, which assume much denser data. Second, its performance declines significantly in scenarios involving rotational and lateral variations, owing to the limited overlap in its field of view (FOV). We propose mmPlace, a robust place recognition system that addresses these challenges. Specifically, mmPlace transforms the intermediate frequency (IF) signal into a range-azimuth heatmap and employs a spatial encoder to extract features. Additionally, to improve performance in scenarios involving rotational and lateral variations, mmPlace employs a rotating platform and concatenates the heatmaps acquired over a rotation cycle, effectively expanding the system's FOV. We evaluate mmPlace on the milliSonic dataset, collected on the University of Science and Technology of China (USTC) campus, on the city roads surrounding the campus, and in an underground parking garage. The results demonstrate that mmPlace outperforms point cloud-based methods and achieves 87.37% recall@1 in scenarios involving rotational and lateral variations.
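To make the IF-signal-to-heatmap step concrete, the sketch below shows one standard way to turn raw FMCW IF (ADC) samples into a range-azimuth heatmap: a range FFT along the fast-time axis, non-coherent integration over chirps, and an angle FFT across the virtual antenna array. This is a minimal illustration, not the authors' implementation; the array shapes, the number of angle bins, and the choice to simply sum over chirps are illustrative assumptions.

```python
"""Minimal sketch: range-azimuth heatmap from raw FMCW IF samples.
Assumes complex ADC data of shape (n_antennas, n_chirps, n_samples);
shapes and bin counts are illustrative, not taken from the paper."""
import numpy as np


def range_azimuth_heatmap(adc, num_angle_bins=64):
    # 1) Range FFT along the fast-time (ADC sample) axis.
    range_fft = np.fft.fft(adc, axis=-1)                 # (n_ant, n_chirps, n_range)
    # 2) Sum over chirps (simple integration; Doppler is ignored in this sketch).
    range_profile = range_fft.sum(axis=1)                # (n_ant, n_range)
    # 3) Angle FFT across the virtual antenna array, zero-padded to num_angle_bins.
    angle_fft = np.fft.fft(range_profile, n=num_angle_bins, axis=0)
    angle_fft = np.fft.fftshift(angle_fft, axes=0)       # centre zero azimuth
    # 4) Log-magnitude gives the range-azimuth heatmap.
    heatmap = 20 * np.log10(np.abs(angle_fft) + 1e-6)    # (num_angle_bins, n_range)
    return heatmap.T                                      # (range bins, azimuth bins)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_adc = rng.standard_normal((8, 64, 256)) + 1j * rng.standard_normal((8, 64, 256))
    print(range_azimuth_heatmap(fake_adc).shape)          # -> (256, 64)
```

Heatmaps produced this way for successive radar poses on the rotating platform could then be concatenated along the azimuth axis over one rotation cycle before being fed to the spatial encoder, which is the FOV-expansion idea described in the abstract.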