
Optical Flow Based Detection and Tracking of Moving Objects for Autonomous Vehicles (2403.17779v1)

Published 26 Mar 2024 in cs.RO, cs.SY, and eess.SY

Abstract: Accurate estimation of the velocities and trajectories of surrounding moving objects is a critical element of perception systems in Automated/Autonomous Vehicles (AVs), with a direct impact on their safety. These are non-trivial problems due to the diverse types and sizes of such objects and their dynamic and random behaviour. Recent point cloud based solutions often use Iterative Closest Point (ICP) techniques, which are known to have certain limitations. For example, their computational costs are high due to their iterative nature, and their estimation error often deteriorates as the relative velocities of the target objects increase (>2 m/sec). Motivated by such shortcomings, this paper first proposes a novel Detection and Tracking of Moving Objects (DATMO) technique for AVs based on an optical flow method, which is proven to be computationally efficient and highly accurate for such problems. This is achieved by representing the driving scenario as a vector field and applying vector calculus theories to ensure spatiotemporal continuity. We also report the results of a comprehensive performance evaluation of the proposed DATMO technique, carried out in this study using synthetic and real-world data. The results demonstrate the superiority of the proposed technique over the DATMO techniques in the literature in terms of estimation accuracy and processing time across a wide range of relative velocities of moving objects. Finally, we evaluate and discuss the sensitivity of the estimation error of the proposed technique to various system and environmental parameters, as well as to the relative velocities of the moving objects.
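To make the core idea concrete, the sketch below illustrates (but is not the authors' implementation of) the general approach the abstract describes: computing a dense optical flow field between two consecutive bird's-eye-view (BEV) renderings of a LiDAR scene, scaling it into a metric velocity vector field, and evaluating its divergence as a simple spatial-continuity check on the motion. OpenCV's Farnebäck optical flow is one standard dense method; the grid resolution, frame rate, and BEV rasterisation are assumed inputs.

```python
# Minimal illustrative sketch of optical-flow-based velocity estimation on
# BEV frames, assuming OpenCV's Farneback dense flow. CELL_SIZE_M and DT_S
# are hypothetical parameters, not values from the paper.
import cv2
import numpy as np

CELL_SIZE_M = 0.1   # assumed BEV grid resolution: metres per pixel
DT_S = 0.1          # assumed time between scans (10 Hz sensor)

def estimate_velocity_field(bev_prev: np.ndarray, bev_curr: np.ndarray) -> np.ndarray:
    """Dense optical flow between two 8-bit grayscale BEV frames,
    converted from pixels/frame to metres/second."""
    flow = cv2.calcOpticalFlowFarneback(
        bev_prev, bev_curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow * (CELL_SIZE_M / DT_S)   # shape (H, W, 2), in m/s

def divergence(v: np.ndarray) -> np.ndarray:
    """Divergence of the 2-D velocity field. Near-zero divergence within a
    region is consistent with spatially continuous, rigid-body-like motion,
    which is the kind of vector-calculus continuity check the paper invokes."""
    dvx_dx = np.gradient(v[..., 0], axis=1)  # d(vx)/dx along image columns
    dvy_dy = np.gradient(v[..., 1], axis=0)  # d(vy)/dy along image rows
    return dvx_dx + dvy_dy
```

In this framing, cells with non-negligible flow magnitude indicate moving objects, and grouping adjacent cells whose velocity field is locally divergence-free yields per-object velocity estimates without the iterative alignment step that makes ICP-based methods costly.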
