
Trajectory Prediction for Autonomous Driving Using a Transformer Network (2402.16501v1)

Published 26 Feb 2024 in cs.RO

Abstract: Predicting the trajectories of surrounding agents is still considered one of the most challenging tasks for autonomous driving. In this paper, we introduce a multi-modal trajectory prediction framework based on the transformer network. The semantic map of each agent is used as input to a convolutional network to automatically derive relevant contextual information. A novel auxiliary loss that penalizes infeasible off-road predictions is also proposed in this study. Experiments on the Lyft l5kit dataset show that the proposed model achieves state-of-the-art performance, substantially improving the accuracy and feasibility of the prediction outcomes.
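
The off-road auxiliary loss is the paper's stated novelty, but no implementation details appear on this page. As a rough sketch only, here is one common way such a penalty can be formulated: sample a binary drivable-area raster at the predicted waypoints with bilinear interpolation, so that the penalty remains differentiable with respect to the predictions. The function name offroad_loss, the tensor layouts, and the weight lam are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn.functional as F

    def offroad_loss(pred_traj, drivable_mask):
        # pred_traj:     (B, T, 2) predicted (x, y) waypoints in raster pixel
        #                coordinates (layout assumed for illustration).
        # drivable_mask: (B, 1, H, W) semantic raster, 1.0 = drivable road,
        #                0.0 = off-road.
        B, _, H, W = drivable_mask.shape
        # Map pixel coordinates to the [-1, 1] range expected by grid_sample.
        grid = pred_traj.clone()
        grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0
        grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0
        grid = grid.unsqueeze(2)  # (B, T, 1, 2)
        # Bilinear sampling keeps the penalty differentiable in pred_traj.
        drivable = F.grid_sample(drivable_mask, grid,
                                 mode='bilinear', align_corners=True)
        # Average "off-roadness" over all predicted waypoints.
        return (1.0 - drivable).mean()

    # Hypothetical usage: add the penalty to the main trajectory loss with a
    # tuning weight lam (a free hyperparameter, not taken from the paper).
    # loss = trajectory_loss + lam * offroad_loss(pred_traj, drivable_mask)

In a multi-modal setup, the same sampling trick can be applied per predicted mode and weighted by the mode probabilities; whether the paper does exactly this is not stated in the abstract.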

