
Falcon: A Remote Sensing Vision-Language Foundation Model (2503.11070v1)

Published 14 Mar 2025 in cs.CV

Abstract: This paper introduces a holistic vision-language foundation model tailored for remote sensing, named Falcon. Falcon offers a unified, prompt-based paradigm that effectively executes comprehensive and complex remote sensing tasks. Falcon demonstrates powerful understanding and reasoning abilities at the image, region, and pixel levels. Specifically, given simple natural language instructions and remote sensing images, Falcon can produce impressive results in text form across 14 distinct tasks, e.g., image classification, object detection, segmentation, and image captioning. To facilitate Falcon's training and empower its representation capacity to encode rich spatial and semantic information, we developed Falcon_SFT, a large-scale, multi-task, instruction-tuning dataset in the field of remote sensing. The Falcon_SFT dataset consists of approximately 78 million high-quality data samples, covering 5.6 million multi-spatial resolution and multi-view remote sensing images with diverse instructions. It features hierarchical annotations and undergoes manual sampling verification to ensure high data quality and reliability. Extensive comparative experiments are conducted, which verify that Falcon achieves remarkable performance over 67 datasets and 14 tasks, despite having only 0.7B parameters. We release the complete dataset, code, and model weights at https://github.com/TianHuiLab/Falcon, hoping to help further develop the open-source community.

Summary

  • The paper introduces Falcon, a 0.7B parameter vision-language foundation model designed for remote sensing, utilizing a unified prompt-based approach for various tasks.
  • A key innovation is Falcon_SFT, a large-scale (78M samples, 5.6M images), multi-task instruction-tuning dataset that enables Falcon's robust training and versatility.
  • Falcon achieves state-of-the-art performance across 14 tasks and 67 datasets, demonstrating strong generalization and potential for practical applications in remote sensing.

Falcon: A Remote Sensing Vision-Language Foundation Model

This paper introduces Falcon, a vision-language foundation model explicitly tailored for remote sensing applications. Falcon integrates vision and language in a holistic paradigm, leveraging a unified, prompt-based approach to perform a wide range of complex remote sensing tasks. The model demonstrates strong understanding and reasoning abilities at the image, region, and pixel levels, corroborated by evaluations across 14 distinct tasks such as image classification, object detection, segmentation, and image captioning.
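
To make the "everything is a text prompt" paradigm concrete, the following is a minimal, hypothetical sketch: the prompt strings, file names, and the `falcon_generate` function are illustrative assumptions rather than the released API. The point is that different tasks differ only in the natural-language instruction, and the model always replies in text.

```python
# Hypothetical sketch of a unified prompt-based interface of the kind Falcon uses.
# Prompts, file names, and falcon_generate() are illustrative assumptions,
# not the released API.

from dataclasses import dataclass

@dataclass
class FalconRequest:
    image_path: str   # remote sensing image (single or paired input)
    instruction: str  # natural-language task prompt

# Different tasks are expressed purely through the instruction text.
requests = [
    FalconRequest("scene.tif", "Classify the scene in this image."),
    FalconRequest("scene.tif", "Detect all airplanes and return their bounding boxes."),
    FalconRequest("scene.tif", "Describe this image in one sentence."),
]

def falcon_generate(request: FalconRequest) -> str:
    # Placeholder: in practice this would run the model's encoder-decoder
    # and decode a textual answer for the given instruction.
    return f"[model output for: {request.instruction}]"

for req in requests:
    print(falcon_generate(req))
```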

A crucial innovation underpinning Falcon's capabilities is the development of Falcon_SFT, a large-scale, multi-task instruction-tuning dataset. This dataset is composed of approximately 78 million high-quality samples encompassing 5.6 million remote sensing images with diverse resolution and viewpoint attributes. Falcon_SFT is characterized by its comprehensive annotation hierarchy and rigorous sample quality verification.
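
As a rough picture of what a multi-task instruction-tuning sample can look like, the records below are hypothetical illustrations; the field names and values are assumptions, not the published Falcon_SFT schema. The key idea is that every sample pairs an image with an instruction and a textual target, and region-level tasks serialize their geometry as text so all tasks share the same output space.

```python
# Hypothetical instruction-tuning records, illustrating the kind of
# image/instruction/answer triplets a dataset like Falcon_SFT contains.
# Field names and values are assumptions, not the published schema.

samples = [
    {
        "image": "images/airport_0412.tif",
        "task": "image_classification",
        "instruction": "What is the land-use category of this scene?",
        "answer": "airport",
    },
    {
        "image": "images/harbor_0098.tif",
        "task": "object_detection",
        "instruction": "Locate all ships in the image.",
        # Region-level targets are written out as text so every task
        # shares the same text-only output space.
        "answer": "ship <box>112,340,198,401</box> ship <box>510,22,587,96</box>",
    },
    {
        "image": "images/farmland_1533.tif",
        "task": "image_captioning",
        "instruction": "Describe this image briefly.",
        "answer": "Rectangular farmland plots divided by narrow dirt roads.",
    },
]

for s in samples:
    print(f"[{s['task']}] {s['instruction']} -> {s['answer']}")
```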

The paper highlights Falcon's remarkable performance across benchmarks, surpassing existing state-of-the-art models despite its relatively modest size of 0.7 billion parameters. Extensive evaluations over 67 datasets confirm Falcon's efficacy on all 14 tasks, setting a new standard in the domain of remote sensing.

One of the significant conceptual advancements presented is tackling the domain and knowledge gap between natural images and remote sensing data. Previous works have primarily concentrated on task-specific models for remote sensing, which constrains their scalability and adaptability. Falcon addresses these limitations by serving as a more versatile and comprehensive foundation model capable of executing reasoning tasks at different levels of granularity.

Additionally, the paper emphasizes Falcon's data-driven training enabled by Falcon_SFT. The dataset draws on a broad array of remote sensing images and employs innovative data annotation techniques, ensuring the model learns robust and generalizable representations. It extends beyond typical remote sensing annotations by incorporating hierarchical annotations, broadening the model's versatility and range of applications.

The architecture of Falcon integrates an image encoder with a multi-modality encoder-decoder that transforms a single image or an image pair into a unified textual output. Furthermore, Falcon adopts a dynamic prompt training strategy designed to handle diverse instruction formats, improving its comprehension of varied task prompts.
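
To make this architectural description concrete, here is a minimal structural sketch in PyTorch, assuming a generic image encoder that patchifies the input and a transformer encoder-decoder that fuses visual and prompt tokens before decoding text. The layer choices, dimensions, and module names are illustrative assumptions, not Falcon's actual implementation.

```python
# Minimal structural sketch of an image-encoder + multi-modal encoder-decoder
# pipeline of the kind described for Falcon. Sizes and module choices are
# illustrative assumptions, not the released architecture.

import torch
import torch.nn as nn

class ToyVisionLanguageModel(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        # Image encoder: turns an image into a sequence of visual tokens.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),  # patchify
            nn.Flatten(2),                                      # (B, d_model, N)
        )
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Multi-modality encoder-decoder over fused visual + prompt tokens.
        self.encoder_decoder = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, prompt_ids, target_ids):
        vis = self.image_encoder(image).transpose(1, 2)  # (B, N, d_model)
        prm = self.text_embed(prompt_ids)                # (B, T_p, d_model)
        src = torch.cat([vis, prm], dim=1)               # fused input sequence
        tgt = self.text_embed(target_ids)                # answer decoded as text
        hid = self.encoder_decoder(src, tgt)
        return self.lm_head(hid)                         # logits over vocabulary

model = ToyVisionLanguageModel()
image = torch.randn(1, 3, 224, 224)
prompt = torch.randint(0, 32000, (1, 16))
target = torch.randint(0, 32000, (1, 8))
logits = model(image, prompt, target)
print(logits.shape)  # torch.Size([1, 8, 32000])
```

Because every task's answer is decoded through the same language head, adding a new task requires only new instruction-answer pairs rather than a new task-specific output module.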

Falcon's utility extends beyond immediate task performance; it holds potential for significant practical applications in the remote sensing domain, such as land cover classification, urban planning, and environmental monitoring. The model's adaptability to diverse tasks underscores its potential as a baseline platform for further advances in remote sensing vision-language models.

The paper concludes with a commitment to open-source the complete dataset, source code, and model weights, which is expected to catalyze further exploration and development in the community. Such transparency and collaboration are pivotal for stimulating innovation and advancing the state-of-the-art in remote sensing AI.

Future directions for this research include refining the model's performance on more nuanced and complex remote sensing tasks, exploring the integration with additional types of non-image data, and further reducing computational demands without sacrificing accuracy. Considering Falcon's promising results, it has the capacity to significantly shape the future of AI applications in remote sensing.
