Beyond Segmentation: Road Network Generation with Multi-Modal LLMs (2310.09755v1)
Abstract: This paper introduces an approach to road network generation using a multi-modal LLM. Our model processes aerial images of road layouts and produces detailed, navigable road networks for the input images. The core innovation of our system is the training methodology that enables the LLM to generate road networks directly as its output. The approach draws inspiration from the BLIP-2 architecture (arXiv:2301.12597), leveraging pre-trained frozen image encoders and LLMs to build a versatile multi-modal LLM. Our work also offers an alternative to the reasoning segmentation method proposed in the LISA paper (arXiv:2308.00692): by training the LLM with our approach, the need to generate binary segmentation masks is eliminated. Experimental results underscore the efficacy of our multi-modal LLM in providing precise and valuable navigational guidance. This work represents a significant step toward strengthening autonomous navigation systems, especially in road network scenarios where accurate guidance is paramount.
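The abstract describes a BLIP-2-style pipeline: a frozen image encoder and a frozen LLM connected by a small trainable bridging module, with the LLM supervised to emit the road network as text rather than as a binary segmentation mask. The sketch below is a minimal PyTorch illustration of that general pattern; the class names, feature dimensions, number of query tokens, and the text serialization of the road graph are assumptions made for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class RoadNetworkBridge(nn.Module):
    """Small trainable module (in the spirit of BLIP-2's Q-Former) that turns
    frozen visual features into a fixed number of tokens in the LLM's
    embedding space. Only this module carries gradients."""

    def __init__(self, vis_dim: int = 768, llm_dim: int = 1024, num_query: int = 32):
        super().__init__()
        self.query = nn.Parameter(torch.randn(num_query, vis_dim))
        self.cross_attn = nn.MultiheadAttention(vis_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (B, num_patches, vis_dim) from a frozen image encoder.
        q = self.query.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        fused, _ = self.cross_attn(q, image_feats, image_feats)
        return self.proj(fused)  # (B, num_query, llm_dim)


class RoadNetworkMLLM(nn.Module):
    """Frozen image encoder + trainable bridge. The resulting visual tokens
    would be prepended to the prompt embeddings of a frozen LLM, which is then
    supervised (through the bridge) to generate a text serialization of the
    road graph, e.g. "nodes: (12,40) (85,37) ...; edges: 0-1 1-2 ..."."""

    def __init__(self, image_encoder: nn.Module, vis_dim: int = 768, llm_dim: int = 1024):
        super().__init__()
        self.image_encoder = image_encoder.eval()
        for p in self.image_encoder.parameters():
            p.requires_grad = False  # keep the encoder frozen, as in BLIP-2
        self.bridge = RoadNetworkBridge(vis_dim=vis_dim, llm_dim=llm_dim)

    def forward(self, aerial_images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.image_encoder(aerial_images)  # (B, patches, vis_dim)
        return self.bridge(feats)


class DummyPatchEncoder(nn.Module):
    """Stand-in for a real frozen ViT: maps an image to patch features."""

    def __init__(self, patch: int = 16, vis_dim: int = 768):
        super().__init__()
        self.conv = nn.Conv2d(3, vis_dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x).flatten(2).transpose(1, 2)  # (B, patches, vis_dim)


if __name__ == "__main__":
    model = RoadNetworkMLLM(DummyPatchEncoder())
    images = torch.randn(2, 3, 224, 224)       # batch of aerial images
    visual_tokens = model(images)               # (2, 32, 1024)
    print(visual_tokens.shape)
```

In this sketch only the bridge parameters are trainable, which mirrors the abstract's claim of leveraging pre-trained frozen image encoders and LLMs; how the road graph is actually tokenized and supervised is left open here, since the abstract does not specify it.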