On the Promises and Challenges of Multimodal Foundation Models for Geographical, Environmental, Agricultural, and Urban Planning Applications (2312.17016v1)
Abstract: The advent of large language models (LLMs) has heightened interest in their potential for multimodal applications that integrate language and vision. This paper explores the capabilities of GPT-4V in the realms of geography, environmental science, agriculture, and urban planning by evaluating its performance across a variety of tasks. Data sources comprise satellite imagery, aerial photos, ground-level images, field images, and public datasets. The model is evaluated on a series of tasks including geo-localization, textual data extraction from maps, remote sensing image classification, visual question answering, crop type identification, disease/pest/weed recognition, chicken behavior analysis, agricultural object counting, urban planning knowledge question answering, and plan generation. The results indicate the potential of GPT-4V in geo-localization, land cover classification, visual question answering, and basic image understanding. However, it shows limitations on tasks requiring fine-grained recognition and precise counting. While zero-shot learning shows promise, performance varies across problem domains and image complexities. The work provides novel insights into GPT-4V's capabilities and limitations for real-world geospatial, environmental, agricultural, and urban planning challenges. Further research should focus on augmenting the model's knowledge and reasoning for specialized domains through expanded training. Overall, the analysis demonstrates foundational multimodal intelligence, highlighting the potential of multimodal foundation models (FMs) to advance interdisciplinary applications at the nexus of computer vision and language.
- Chenjiao Tan
- Qian Cao
- Yiwei Li
- Jielu Zhang
- Xiao Yang
- Huaqin Zhao
- Zihao Wu
- Zhengliang Liu
- Hao Yang
- Nemin Wu
- Tao Tang
- Xinyue Ye
- Lilong Chai
- Ninghao Liu
- Changying Li
- Lan Mu
- Tianming Liu
- Gengchen Mai