VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis (2403.20213v4)
Abstract: This paper develops a Versatile and Honest vision language model (VHM) for remote sensing image analysis. VHM is built on a large-scale remote sensing image-text dataset with rich-content captions (VersaD) and an honest instruction dataset comprising both factual and deceptive questions (HnstD). Unlike prevailing remote sensing image-text datasets, whose captions focus on a few prominent objects and their relationships, VersaD captions provide detailed information about image properties, object attributes, and the overall scene. This comprehensive captioning enables VHM to thoroughly understand remote sensing images and perform diverse remote sensing tasks. Moreover, unlike existing remote sensing instruction datasets that include only factual questions, HnstD contains additional deceptive questions stemming from the non-existence of objects. This feature prevents VHM from producing affirmative answers to nonsensical queries, thereby ensuring its honesty. In our experiments, VHM significantly outperforms various vision language models on the common tasks of scene classification, visual question answering, and visual grounding. Additionally, VHM achieves competent performance on several previously unexplored tasks, such as building vectorization, multi-label classification, and honest question answering. We will release the code, data, and model weights at https://github.com/opendatalab/VHM .
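To make the honest-instruction idea above concrete, the sketch below shows what a paired factual/deceptive sample in the spirit of HnstD might look like. The field names and the `build_honest_pair` helper are hypothetical illustrations, not the released dataset's actual schema.

```python
# Hypothetical sketch of HnstD-style instruction samples: one factual question
# about an object that is present, and one deceptive question about an object
# absent from the image. Field names are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class HonestSample:
    image_path: str      # path to the remote sensing image
    question: str        # instruction-style question posed to the model
    answer: str          # expected honest answer
    is_deceptive: bool   # True if the question refers to a non-existent object


def build_honest_pair(image_path: str, present_obj: str, absent_obj: str):
    """Create a factual/deceptive question pair for one image (illustrative only)."""
    factual = HonestSample(
        image_path=image_path,
        question=f"Is there a {present_obj} in this image?",
        answer="Yes.",
        is_deceptive=False,
    )
    deceptive = HonestSample(
        image_path=image_path,
        question=f"What color is the {absent_obj} in this image?",
        answer=f"There is no {absent_obj} in this image.",
        is_deceptive=True,
    )
    return factual, deceptive


if __name__ == "__main__":
    for sample in build_honest_pair("scene_0001.png", "airplane", "ship"):
        print(sample)
```

The deceptive sample trains the model to reject the question's false premise rather than hallucinate an affirmative answer, which is the behavior the abstract describes as honesty.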
Authors:
- Chao Pang
- Jiang Wu
- Jiayu Li
- Yi Liu
- Jiaxing Sun
- Weijia Li
- Xingxing Weng
- Shuai Wang
- Litong Feng
- Gui-Song Xia
- Conghui He