WiDEVIEW: An UltraWideBand and Vision Dataset for Deciphering Pedestrian-Vehicle Interactions (2309.16057v1)

Published 27 Sep 2023 in cs.RO

Abstract: Robust and accurate tracking and localization of road users such as pedestrians and cyclists is crucial for safe and effective navigation of Autonomous Vehicles (AVs), particularly in urban driving scenarios with complex vehicle-pedestrian interactions. Existing datasets useful for investigating vehicle-pedestrian interactions are mostly image-centric and thus vulnerable to vision failures. In this paper, we investigate Ultra-wideband (UWB) as an additional modality for road-user localization to enable a better understanding of vehicle-pedestrian interactions. We present WiDEVIEW, the first multimodal dataset that integrates LiDAR, three RGB cameras, GPS/IMU, and UWB sensors for capturing vehicle-pedestrian interactions in an urban autonomous driving scenario. Ground-truth image annotations are provided as 2D bounding boxes, and the dataset is evaluated on standard 2D object detection and tracking algorithms. The feasibility of UWB is evaluated for typical traffic scenarios in both line-of-sight and non-line-of-sight conditions, using LiDAR as ground truth. We establish that UWB range data achieves accuracy comparable to LiDAR, with an error of 0.19 meters, and provides reliable anchor-tag range data up to 40 meters in line-of-sight conditions. UWB performance in non-line-of-sight conditions depends on the nature of the obstruction (trees vs. buildings). Further, we provide a qualitative analysis of UWB performance in scenarios susceptible to intermittent vision failures. The dataset can be downloaded via https://github.com/unmannedlab/UWB_Dataset.
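
The LiDAR-as-ground-truth evaluation described in the abstract reduces to comparing time-synchronized UWB anchor-tag ranges against LiDAR-derived distances to the same tag and summarizing the error. Below is a minimal sketch of that comparison; it is not code from the WiDEVIEW repository, and the function name, the 40 m cutoff parameter, and the synthetic inputs are illustrative assumptions.

```python
import numpy as np

def range_error_stats(uwb_ranges, lidar_ranges, max_range=40.0):
    """Compare UWB anchor-tag ranges against LiDAR-derived distances.

    Illustrative sketch, not code from the WiDEVIEW repo. Both inputs
    are 1-D arrays of per-timestamp distances in meters, assumed to be
    already time-synchronized. Returns error statistics over samples
    inside the usable line-of-sight range.
    """
    uwb = np.asarray(uwb_ranges, dtype=float)
    lidar = np.asarray(lidar_ranges, dtype=float)

    # Keep only samples within the range the paper reports as
    # reliable in line-of-sight conditions (up to ~40 m).
    mask = lidar <= max_range
    abs_err = np.abs(uwb[mask] - lidar[mask])

    return {
        "mae_m": float(abs_err.mean()),
        "rmse_m": float(np.sqrt((abs_err ** 2).mean())),
        "n_samples": int(mask.sum()),
    }

# Synthetic example: UWB ranges perturbed by zero-mean Gaussian noise
# with a 0.19 m standard deviation, mirroring the scale of the
# line-of-sight error the paper reports.
rng = np.random.default_rng(0)
lidar = rng.uniform(5.0, 40.0, size=1000)
uwb = lidar + rng.normal(0.0, 0.19, size=1000)
print(range_error_stats(uwb, lidar))
```

On the real dataset, the two input arrays would come from the synchronized UWB and LiDAR streams rather than synthetic draws; the non-line-of-sight analysis would additionally partition samples by obstruction type (trees vs. buildings) before computing the same statistics.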
