
DREAM-PCD: Deep Reconstruction and Enhancement of mmWave Radar Pointcloud (2309.15374v1)

Published 27 Sep 2023 in eess.IV and cs.RO

Abstract: Millimeter-wave (mmWave) radar pointcloud offers attractive potential for 3D sensing thanks to its robustness in challenging conditions such as smoke and low illumination. However, existing methods fail to simultaneously address the three main challenges in mmWave radar pointcloud reconstruction: loss of specular information, low angular resolution, and strong interference and noise. In this paper, we propose DREAM-PCD, a novel framework that combines signal processing and deep learning methods in three well-designed components that tackle all three challenges: Non-Coherent Accumulation for dense points, Synthetic Aperture Accumulation for improved angular resolution, and a Real-Denoise Multiframe network for noise and interference removal. Moreover, the causal multiframe and "real-denoise" mechanisms in DREAM-PCD significantly enhance generalization performance. We also introduce RadarEyes, the largest mmWave indoor dataset, with over 1,000,000 frames and a unique design incorporating two orthogonal single-chip radars, lidar, and a camera, enriching dataset diversity and applications. Experimental results demonstrate that DREAM-PCD surpasses existing methods in reconstruction quality and exhibits superior generalization and real-time capabilities, enabling high-quality real-time reconstruction of radar pointclouds under various parameters and scenarios. We believe that DREAM-PCD, together with the RadarEyes dataset, will significantly advance mmWave radar perception in future real-world applications.
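The densification idea behind the Non-Coherent Accumulation component can be illustrated with a minimal sketch: per-frame radar point clouds are transformed into a common world frame using each frame's sensor pose and concatenated. This is an illustrative assumption about the accumulation step, not the paper's actual implementation; the function name and data layout (`(N, 3)` xyz arrays, 4x4 homogeneous poses) are hypothetical.

```python
import numpy as np

def accumulate_frames(frames, poses):
    """Non-coherently accumulate per-frame radar point clouds.

    frames: list of (N_i, 3) arrays of xyz points in sensor coordinates
    poses:  list of (4, 4) homogeneous sensor-to-world transforms
    Returns a single (sum N_i, 3) array of points in the world frame.
    """
    world_points = []
    for pts, pose in zip(frames, poses):
        # Lift to homogeneous coordinates, apply the pose, drop the w component.
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N_i, 4)
        world_points.append((homo @ pose.T)[:, :3])
    # Concatenating across frames yields a denser cloud than any single frame.
    return np.vstack(world_points)
```

Unlike Synthetic Aperture Accumulation, this step discards phase and works purely on detected points, which is why it densifies the cloud but cannot by itself improve angular resolution.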
