
Perception Without Vision for Trajectory Prediction: Ego Vehicle Dynamics as Scene Representation for Efficient Active Learning in Autonomous Driving (2405.09049v2)

Published 15 May 2024 in cs.LG, cs.AI, cs.CV, and cs.RO

Abstract: This study investigates the use of trajectory and dynamic state information for efficient data curation in autonomous driving machine learning tasks. We propose methods for clustering trajectory-states and sampling strategies in an active learning framework, aiming to reduce annotation and data costs while maintaining model performance. Our approach leverages trajectory information to guide data selection, promoting diversity in the training data. We demonstrate the effectiveness of our methods on the trajectory prediction task using the nuScenes dataset, showing consistent performance gains over random sampling across different data pool sizes, and even reaching sub-baseline displacement errors at just 50% of the data cost. Our results suggest that sampling typical data initially helps overcome the "cold start problem," while introducing novelty becomes more beneficial as the training pool size increases. By integrating trajectory-state-informed active learning, we demonstrate that more efficient and robust autonomous driving systems are possible and practical using low-cost data curation strategies.
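The core idea in the abstract — cluster scenes by ego trajectory-state features, then pick "typical" samples (near cluster centroids) for small budgets and "novel" samples (far from centroids) for larger ones — can be illustrated with a small sketch. This is not the paper's implementation: the feature choice, the use of a toy k-means rather than the paper's clustering method, and the function names are all assumptions for illustration.

```python
import numpy as np

def cluster_trajectory_states(features, k=3, iters=50, seed=0):
    """Toy k-means over ego-dynamics feature vectors (e.g. mean speed,
    yaw rate, acceleration per scene). A stand-in for the paper's
    trajectory-state clustering; the features are illustrative only."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each scene to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Update centroids as the mean of their assigned scenes.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids

def select_for_annotation(features, labels, centroids, budget,
                          typical_first=True):
    """Rank unlabeled scenes by distance to their cluster centroid.
    typical_first=True mimics sampling typical data to overcome the
    cold start; False favors novelty once the training pool grows."""
    dist = np.linalg.norm(features - centroids[labels], axis=1)
    order = np.argsort(dist if typical_first else -dist)
    return order[:budget]
```

In an active-learning loop one would alternate: select a batch under the current budget, annotate it, retrain the trajectory predictor, and (per the abstract's finding) switch `typical_first` to `False` as the labeled pool grows.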
