The Radar Ghost Dataset -- An Evaluation of Ghost Objects in Automotive Radar Data (2404.01437v1)

Published 1 Apr 2024 in cs.CV

Abstract: Radar sensors have a long tradition in advanced driver assistance systems (ADAS) and also play a major role in current concepts for autonomous vehicles. Their importance stems from their high robustness against meteorological effects, such as rain, snow, or fog, and from the radar's ability to measure relative radial velocity differences via the Doppler effect. The cause of these advantages, namely the large wavelength, is also one of the drawbacks of radar sensors. Compared to camera or lidar sensors, many more surfaces in a typical traffic scenario appear flat relative to the radar's emitted signal. This results in multi-path reflections, or so-called ghost detections, in the radar signal. Ghost objects are a major source of potential false positive detections in a vehicle's perception pipeline. Therefore, it is important to be able to segregate multi-path reflections from direct ones. In this article, we present a dataset with detailed manual annotations for different kinds of ghost detections. Moreover, two different approaches for identifying these kinds of objects are evaluated. We hope that our dataset encourages more researchers to engage in the fields of multi-path object suppression or exploitation.
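
The ghost detections mentioned in the abstract arise from multi-path propagation off flat surfaces such as guardrails or building walls. As a rough illustration only (not taken from the paper or its dataset), the sketch below models the idealized second-order multipath case in 2D: when both the outgoing and the returning signal bounce off a flat reflector, the ghost detection appears at the mirror image of the real target behind that surface, at a larger range than the real object. All positions, the wall geometry, and the function names are illustrative assumptions.

```python
import numpy as np

def mirror_point(p, wall_point, wall_normal):
    """Mirror a 2D point across a flat reflecting surface.

    The surface is modeled as an infinite line through `wall_point`
    with (not necessarily unit-length) normal `wall_normal`.
    """
    n = wall_normal / np.linalg.norm(wall_normal)
    # Signed distance of the point from the wall along the normal.
    d = np.dot(p - wall_point, n)
    return p - 2.0 * d * n

# Hypothetical scene: radar at the origin, one real target, and a
# guardrail-like reflector running parallel to the x-axis at y = -3 m.
radar = np.array([0.0, 0.0])
target = np.array([10.0, 2.0])        # real object position (m)
wall_point = np.array([0.0, -3.0])    # a point on the reflector
wall_normal = np.array([0.0, 1.0])    # reflector normal, pointing into the scene

# Idealized second-order multipath: the signal travels
# radar -> wall -> target -> wall -> radar, so the resulting detection
# appears at the target's mirror image behind the reflecting surface.
ghost = mirror_point(target, wall_point, wall_normal)

r_real = np.linalg.norm(target - radar)
r_ghost = np.linalg.norm(ghost - radar)
print(f"real target at {target}, range {r_real:.2f} m")
print(f"ghost appears at {ghost}, range {r_ghost:.2f} m")
```

In this toy geometry the ghost shows up at roughly (10, -8) with a range of about 12.8 m versus 10.2 m for the real target, i.e. a plausible-looking but non-existent object behind the guardrail. Distinguishing such mirrored detections from direct reflections is exactly the segmentation task the dataset is annotated for.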

Citations (13)
