
EdgeLoc: A Communication-Adaptive Parallel System for Real-Time Localization in Infrastructure-Assisted Autonomous Driving (2405.12120v2)

Published 20 May 2024 in cs.DC and cs.NI

Abstract: This paper presents EdgeLoc, an infrastructure-assisted, real-time localization system for autonomous driving that addresses the incompatibility between traditional localization methods and deep learning approaches. Built on top of the Robot Operating System (ROS), the system combines the real-time performance of traditional methods with the high accuracy of deep learning approaches. It leverages the edge computing capability of roadside units (RSUs) to enhance on-vehicle localization based on real-time visual odometry. EdgeLoc is a parallel processing system built around an uncertainty-aware pose fusion solution. It achieves communication adaptivity through online learning and handles network fluctuations via window-based detection. It further achieves optimal latency and maximum accuracy improvement through auto-splitting vehicle-infrastructure collaborative inference, combined with online distribution learning for decision-making. Even with the most basic end-to-end deep neural network for localization estimation, EdgeLoc reduces the localization error by 67.75% compared to real-time local visual odometry, by 29.95% compared to non-real-time collaborative inference, and by 30.26% compared to Kalman filtering. Finally, the accuracy-to-latency conversion was experimentally validated, and an overall experiment was conducted on a practical cellular network. The system is open-sourced at https://github.com/LoganCome/EdgeAssistedLocalization.
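The abstract describes an uncertainty-aware pose fusion step that combines the vehicle's real-time visual-odometry pose with the more accurate (but delayed) edge-side estimate. The paper's exact fusion rule is not given on this page; the sketch below illustrates one standard way such a fusion can work, assuming each source reports an independent Gaussian pose estimate with per-axis variances, in which case the inverse-variance weighted mean is the minimum-variance combination. The function name and pose layout (`[x, y, yaw]`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def fuse_poses(pose_vo, var_vo, pose_edge, var_edge):
    """Fuse a local visual-odometry pose with an edge-side pose estimate.

    Illustrative sketch: each pose is a 1-D array (e.g. [x, y, yaw]) with
    per-axis variances. Assuming independent Gaussian errors, the
    inverse-variance weighted mean is the minimum-variance unbiased
    combination, and the fused variance is the harmonic combination
    1 / (1/var_vo + 1/var_edge).
    """
    pose_vo, var_vo = np.asarray(pose_vo, float), np.asarray(var_vo, float)
    pose_edge, var_edge = np.asarray(pose_edge, float), np.asarray(var_edge, float)

    w_vo = 1.0 / var_vo          # confidence weight of the on-vehicle estimate
    w_edge = 1.0 / var_edge      # confidence weight of the RSU-side estimate

    fused = (w_vo * pose_vo + w_edge * pose_edge) / (w_vo + w_edge)
    fused_var = 1.0 / (w_vo + w_edge)
    return fused, fused_var

# A noisy local VO estimate pulled toward a more confident edge estimate:
fused, fused_var = fuse_poses([10.0, 5.0], [4.0, 4.0], [10.8, 5.4], [1.0, 1.0])
```

The fused variance is always smaller than either input variance, which is why combining the two sources can beat either the local odometry or the collaborative inference alone.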

