VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding

Published 18 Jun 2024 in cs.CV (arXiv:2406.12384v2)

Abstract: We introduce a new benchmark designed to advance the development of general-purpose, large-scale vision-language models for remote sensing images. Although several vision-language datasets in remote sensing have been proposed to pursue this goal, existing datasets are typically tailored to single tasks, lack detailed object information, or suffer from inadequate quality control. Addressing these shortcomings, we present a Versatile vision-language Benchmark for Remote Sensing image understanding, termed VRSBench. This benchmark comprises 29,614 images with 29,614 human-verified detailed captions, 52,472 object references, and 123,221 question-answer pairs. It facilitates the training and evaluation of vision-language models across a broad spectrum of remote sensing image understanding tasks. We further evaluated state-of-the-art models on this benchmark for three vision-language tasks: image captioning, visual grounding, and visual question answering. Our work aims to significantly contribute to the development of advanced vision-language models in the field of remote sensing. The data and code can be accessed at https://github.com/lx709/VRSBench.
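The benchmark pairs each image with three annotation types (a caption, object references, and QA pairs). As a minimal sketch of how such annotations might be consumed, the snippet below parses a hypothetical VRSBench-style JSON record and tallies the three annotation types; the actual field names and file layout in the lx709/VRSBench repository may differ.

```python
import json

# Hypothetical VRSBench-style annotation record (field names are assumptions,
# not the repository's actual schema).
SAMPLE = """
[
  {"image_id": "P0001.png",
   "caption": "An airport with several parked aircraft near two terminals.",
   "objects": [{"ref": "the white aircraft on the left",
                "bbox": [120, 88, 210, 160]}],
   "qa_pairs": [{"question": "How many aircraft are visible?",
                 "answer": "3"}]}
]
"""

def summarize(records):
    """Count the three annotation types the benchmark provides per image."""
    n_captions = sum(1 for r in records if r.get("caption"))
    n_refs = sum(len(r.get("objects", [])) for r in records)
    n_qa = sum(len(r.get("qa_pairs", [])) for r in records)
    return {"captions": n_captions,
            "object_references": n_refs,
            "qa_pairs": n_qa}

records = json.loads(SAMPLE)
print(summarize(records))
# {'captions': 1, 'object_references': 1, 'qa_pairs': 1}
```

Over the full dataset, such counts would total 29,614 captions, 52,472 object references, and 123,221 QA pairs as reported in the abstract.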

Citations (5)


GitHub

  1. GitHub - lx709/VRSBench (28 stars)