GeoGuess: Multimodal Reasoning based on Hierarchy of Visual Information in Street View (2506.16633v1)

Published 19 Jun 2025 in cs.CL, cs.AI, and cs.MM

Abstract: Multimodal reasoning is a process of understanding, integrating and inferring information across different data modalities. It has recently attracted surging academic attention as a benchmark for AI. Although there are various tasks for evaluating multimodal reasoning ability, they still have limitations. Lack of reasoning on hierarchical visual clues at different levels of granularity, e.g., local details and global context, is of little discussion, despite its frequent involvement in real scenarios. To bridge the gap, we introduce a novel and challenging task for multimodal reasoning, namely GeoGuess. Given a street view image, the task is to identify its location and provide a detailed explanation. A system that succeeds in GeoGuess should be able to detect tiny visual clues, perceive the broader landscape, and associate with vast geographic knowledge. Therefore, GeoGuess would require the ability to reason between hierarchical visual information and geographic knowledge. In this work, we establish a benchmark for GeoGuess by introducing a specially curated dataset GeoExplain which consists of panoramas-geocoordinates-explanation tuples. Additionally, we present a multimodal and multilevel reasoning method, namely SightSense which can make prediction and generate comprehensive explanation based on hierarchy of visual information and external knowledge. Our analysis and experiments demonstrate their outstanding performance in GeoGuess.

Summary

  • The paper introduces GeoGuess, a task that combines geo-localization with explainable multimodal reasoning using hierarchical visual cues.
  • It presents the SightSense architecture, integrating local, global, and knowledge-augmented information to improve geographic predictions.
  • Empirical results demonstrate substantial accuracy improvements and superior explanation quality over baselines, highlighting its practical impact.

GeoGuess: Multimodal Reasoning on Hierarchical Visual Cues in Street View

"GeoGuess: Multimodal Reasoning based on Hierarchy of Visual Information in Street View" (2506.16633) addresses a substantial gap in current multimodal AI research: the ability to perform spatial-geographic reasoning from complex real-world visual data at multiple levels of granularity. The paper introduces both a novel task (GeoGuess), a purpose-built dataset (GeoExplain), and a multi-stage model (SightSense) that systematically incorporates local, global, and knowledge-augmented cues to achieve interpretable and more accurate geo-localization.

Task and Dataset: GeoGuess and GeoExplain

GeoGuess moves beyond traditional image geolocalization: given a Google Street View panorama, a model must not only predict the geographic location but also generate a textual explanation justifying that prediction. The explanation should draw on fine-grained details (such as road signs, car plates, or vegetation), global scene context (such as landscape, architecture, or road layout), and, where necessary, external geographic knowledge.

The companion dataset, GeoExplain, addresses the absence of suitable training and evaluation resources for this form of expert-level multimodal reasoning. It consists of over 5,400 location tuples, each comprising:

  • A set of panoramic street view images,
  • Precise geo-coordinates,
  • Human-written explanations (sourced from expert GeoGuessr players on Reddit) describing how the location can be inferred from the cues present in the images.

A key feature is the explicit human annotation of both the visual clues and the reasoning process, ensuring that models can be evaluated not just on prediction but also on their interpretability and use of evidence.
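
For concreteness, a single GeoExplain tuple could be represented roughly as follows; the field names and types are illustrative assumptions, not the dataset's released schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GeoExplainExample:
    """One GeoExplain tuple (illustrative field names, not the released schema)."""
    panoramas: List[str]    # file paths to the street-view panorama images
    latitude: float         # ground-truth geo-coordinate
    longitude: float
    explanation: str        # human-written reasoning citing the visual clues used
```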

The location distribution covers a broad spectrum of continents and landscapes, and the difficulty analysis—through human studies—demonstrates that GeoExplain demands multi-level visual reasoning well beyond what prior datasets such as ScienceQA or GeoQA require.

Methodology: The SightSense Architecture

SightSense adopts a three-stage reasoning pipeline, designed to mimic the multifaceted way humans solve the GeoGuessr task:

  1. Visual Clue Detection: Leveraging an open-vocabulary object detector (Grounding DINO), the system extracts fine-grained, potentially discriminative elements from the input panorama. The object prompts are carefully designed using noun distributions from human explanations to maximize coverage of relevant detail.
  2. Multimodal Knowledge Retrieval: The detected objects are matched against a curated GeoKnowledge base (country-indexed image snippets paired with succinct factoids) using CLIP-based visual embeddings and cosine similarity. The top-k matches supply supplemental textual and visual context that is critical for non-obvious predictions and helps reduce hallucination (see the retrieval sketch after this list).
  3. Reasoning Generation: A large multimodal LLM (Qwen2VL-7B-Instruct), fine-tuned via LoRA on GeoExplain, receives the concatenated global panoramas, local object crops, and the retrieved knowledge snippets as multimodal input. Prompts are structured to elicit outputs in the form: "PLACE {COUNTRY, STATE, CITY, STREET}. EXPLANATION." This format standardizes evaluation and enforces interpretability.
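
A minimal sketch of the retrieval step (stage 2) might look like the following. It assumes the object crops come from the upstream open-vocabulary detector, uses an off-the-shelf CLIP checkpoint, and is not the authors' implementation.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf CLIP encoder; the paper's exact checkpoint is not assumed here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(images):
    """L2-normalized CLIP image embeddings for a list of PIL images."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def retrieve_knowledge(object_crop, kb_images, kb_factoids, k=3):
    """Return the factoids whose knowledge-base images are most similar to the crop."""
    query = embed_images([object_crop])          # (1, d)
    keys = embed_images(kb_images)               # (N, d)
    sims = (query @ keys.T).squeeze(0)           # cosine similarity (embeddings normalized)
    top = sims.topk(min(k, len(kb_factoids))).indices.tolist()
    return [kb_factoids[i] for i in top]
```

The retrieved factoids are then passed to the generator alongside the panorama and the object crops as part of its multimodal input.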

Both the object detection and knowledge retrieval stages are frozen during fine-tuning; only the reasoning-generation parameters are updated, which is computationally efficient and reduces overfitting given the limited data.
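
In practice this corresponds to attaching LoRA adapters to the generator while leaving the detector and retriever untouched. The sketch below uses Hugging Face peft; the rank, alpha, and target modules are assumptions for illustration, not the paper's configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import Qwen2VLForConditionalGeneration

# Base multimodal generator; the detector and retriever are separate frozen components.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto"
)

# Illustrative LoRA settings; the paper's exact hyperparameters are not given in this summary.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```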

Empirical Findings

Quantitative and qualitative evaluations affirm the efficacy of both the dataset and model:

  • Location Accuracy:

SightSense attains a country-level accuracy of 60.97%, compared to 16.8% for a strong Qwen2VL baseline. At the state and city levels, the model achieves 23.88% and 6.11% respectively, substantially outperforming existing approaches at every granularity. Notably, no model approaches human expert performance, validating the expert-level complexity of the task.

  • Explanation Quality:

On BLEU-4, ROUGE-L, CIDEr, METEOR, and BERTScore, SightSense consistently outperforms the baseline models, reaching up to 5.09 BLEU-4 and 87.71 BERTScore. Human studies further confirm improvements in evidence citation, use of external knowledge, and logical soundness of reasoning (overall rating 1.58/2 for SightSense versus 0.67/2 for Qwen2VL).
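
For reference, explanation quality can be scored with standard text-generation metrics. The snippet below is a minimal sketch using the Hugging Face evaluate library for a subset of the reported metrics (BLEU, ROUGE-L, BERTScore), with hypothetical strings; it does not reproduce the paper's exact evaluation setup.

```python
import evaluate

bleu = evaluate.load("bleu")          # BLEU-4 when max_order=4 (the default)
rouge = evaluate.load("rouge")        # reports ROUGE-L among other variants
bertscore = evaluate.load("bertscore")

# Hypothetical model output and reference explanation, for illustration only.
predictions = ["PLACE {USA, Michigan, Detroit, Woodward Avenue}. The plate design suggests Michigan."]
references = ["The license plate style and road signage indicate Michigan, USA."]

print(bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(bertscore.compute(predictions=predictions, references=references, lang="en")["f1"])
```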

  • Ablation Analysis:

Each module—object detection, knowledge retrieval, reasoning—provides additive gains. The pipeline’s modular design also generalizes: transplanting the first two modules into other LLMs uniformly improves their performance, indicating the broad value of hierarchical and knowledge-augmented approaches.

  • Qualitative Case Studies:

SightSense can identify subtle cues (e.g., a Michigan license plate), combine them with contextual landscape features, connect them to external facts (e.g., state-specific plate designs), and formulate a correct, well-supported explanation. By contrast, baseline LLMs either hallucinate, ignore fine details, or make vague and unsupported guesses.

Theoretical and Practical Implications

This work demonstrates that genuine, expert-level multimodal reasoning cannot be achieved by brute-force scaling of generic LLMs or vision-language models alone; hierarchical cue extraction and explicit retrieval-based augmentation are essential for robust, interpretable geographic reasoning.

Practically, this has immediate implications for autonomous navigation, urban planning, disaster response, and augmented reality systems—where not only accuracy but also traceable justifications for inferences are critical.

The theoretical implications lie in the argument for decomposable, modular architectures for multimodal tasks characterized by (1) high granularity in visual features, (2) the need for external knowledge integration, and (3) demand for explanation generation. The approach is likely beneficial for a broader class of "evidence-tracing" multimodal tasks beyond geo-localization.

Directions for Future AI Research

  • Model Generalization:

This approach could be extended to open-world settings with richer and more diverse geographic and cultural coverage, or adapted to video-based and temporally evolving scenes.

  • Dataset Expansion:

Future work could increase the scale and diversity of human explanations, add adversarial or rare-location scenarios, and evaluate multilingual and multimodal explanation generation.

  • Adaptive Retrieval & Reasoning:

Integrating adaptive querying mechanisms for external knowledge and context-sensitive reasoning could further improve both accuracy and transparency.

  • AGI Benchmarks:

Given that GeoGuess remains unsolved at a human expert level, GeoExplain and similar datasets could serve as sensitive benchmarks for progress toward artificial general intelligence, especially in domains requiring grounded, evidence-driven inference.

In summary, this work delivers a high-quality benchmark and a modular, interpretable approach to hierarchical multimodal reasoning, substantiated by strong empirical results and clear implications for both research and application in AI systems requiring explainable visual-geographic inference.