TartanDrive 2.0: More Modalities and Better Infrastructure to Further Self-Supervised Learning Research in Off-Road Driving Tasks (2402.01913v1)

Published 2 Feb 2024 in cs.RO

Abstract: We present TartanDrive 2.0, a large-scale off-road driving dataset for self-supervised learning tasks. In 2021 we released TartanDrive 1.0, which is one of the largest datasets for off-road terrain. As a follow-up to our original dataset, we collected seven hours of data at speeds of up to 15 m/s with the addition of three new LiDAR sensors alongside the original camera, inertial, GPS, and proprioceptive sensors. We also release the tools we use for collecting, processing, and querying the data, including our metadata system designed to further the utility of our data. Custom infrastructure allows end users to reconfigure the data to cater to their own platforms. These tools and infrastructure alongside the dataset are useful for a variety of tasks in the field of off-road autonomy and, by releasing them, we encourage collaborative data aggregation. These resources lower the barrier to entry to utilizing large-scale datasets, thereby helping facilitate the advancement of robotics in areas such as self-supervised learning, multi-modal perception, inverse reinforcement learning, and representation learning. The dataset is available at https://github.com/castacks/tartan_drive_2.0.
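
The abstract describes infrastructure for querying and reconfiguring the multi-modal data to suit a user's own platform. As a rough illustration only, the sketch below shows how an end user might iterate over time-synchronized camera, LiDAR, inertial, and proprioceptive samples exported as per-modality arrays; the directory layout, file names, and field names are hypothetical assumptions for illustration, not the dataset's actual schema or API.

```python
# Hypothetical sketch: iterating over time-synchronized multi-modal samples.
# The layout below (per-trajectory .npy files sharing one timestamp array) is
# an assumption for illustration; it is NOT the actual TartanDrive 2.0 schema.
from pathlib import Path
import numpy as np

MODALITIES = ("image_left", "lidar_points", "imu", "wheel_rpm")  # assumed names


def load_trajectory(traj_dir: Path) -> dict:
    """Load one trajectory: each modality is assumed to be stored as
    <modality>.npy, aligned against a shared timestamps.npy."""
    data = {m: np.load(traj_dir / f"{m}.npy", allow_pickle=True) for m in MODALITIES}
    data["timestamps"] = np.load(traj_dir / "timestamps.npy")
    return data


def iter_synced_samples(traj: dict):
    """Yield one dict per timestep with all modalities aligned on the shared clock."""
    for i, t in enumerate(traj["timestamps"]):
        yield {"t": float(t), **{m: traj[m][i] for m in MODALITIES}}


if __name__ == "__main__":
    root = Path("tartandrive2/trajectories")  # assumed export location
    for traj_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        traj = load_trajectory(traj_dir)
        for sample in iter_synced_samples(traj):
            # e.g., pair proprioception with terrain images for self-supervised labels
            print(sample["t"], sample["imu"].shape, sample["image_left"].shape)
            break  # show only the first sample per trajectory
```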
