Delving into Multi-modal Multi-task Foundation Models for Road Scene Understanding: From Learning Paradigm Perspectives (2402.02968v2)

Published 5 Feb 2024 in cs.CV and cs.LG

Abstract: Foundation models have made a profound impact across many fields, emerging as pivotal components that shape the capabilities of intelligent systems. In the context of intelligent vehicles, leveraging foundation models has proven transformative, offering notable advances in visual understanding. Equipped with multi-modal and multi-task learning capabilities, multi-modal multi-task visual understanding foundation models (MM-VUFMs) effectively process and fuse data from diverse modalities while simultaneously handling various driving-related tasks with strong adaptability, contributing to a more holistic understanding of the surrounding scene. In this survey, we present a systematic analysis of MM-VUFMs specifically designed for road scenes. Our objective is not only to provide a comprehensive overview of common practices, covering task-specific models, unified multi-modal models, unified multi-task models, and foundation model prompting techniques, but also to highlight their advanced capabilities across diverse learning paradigms. These paradigms include open-world understanding, efficient transfer for road scenes, continual learning, and interactive and generative capabilities. Moreover, we provide insights into key challenges and future trends, such as closed-loop driving systems, interpretability, embodied driving agents, and world models. To help researchers stay abreast of the latest developments in MM-VUFMs for road scenes, we maintain a continuously updated repository at https://github.com/rolsheng/MM-VUFM4DS

Authors (15)
  1. Sheng Luo (30 papers)
  2. Wei Chen (1288 papers)
  3. Wanxin Tian (2 papers)
  4. Rui Liu (320 papers)
  5. Luanxuan Hou (4 papers)
  6. Xiubao Zhang (4 papers)
  7. Haifeng Shen (20 papers)
  8. Ruiqi Wu (17 papers)
  9. Shuyi Geng (1 paper)
  10. Yi Zhou (438 papers)
  11. Ling Shao (244 papers)
  12. Yi Yang (855 papers)
  13. Bojun Gao (2 papers)
  14. Qun Li (33 papers)
  15. Guobin Wu (7 papers)
Citations (7)