VisualWebArena: Benchmark for Multimodal Web Agents

Updated 29 October 2025
  • VisualWebArena is a benchmark that evaluates multimodal agents through realistic web tasks combining both visual and textual inputs.
  • It features three environments—Classifieds, Shopping, and Reddit—with 910 tasks designed to test image-text comprehension, spatial reasoning, and decision-making.
  • Baseline results reveal that even advanced multimodal agents achieve only a 16.4% success rate, underscoring challenges in OCR, spatial reasoning, and contextual understanding.

VisualWebArena (VWA) is a specialized benchmark designed to evaluate the performance of multimodal agents on realistic, visually grounded web tasks. This benchmark incorporates complex visual and textual elements to simulate tasks encountered in real-world web environments, challenging agents to employ both vision and language understanding capabilities.

1. Motivations Behind VisualWebArena

The primary motivation for developing VisualWebArena was to address the limitations of previous web agent benchmarks that predominantly focused on text-only tasks. Real-world web interactions often require the integration of visual information—such as images, layout, and color—which text-only agents struggle to process effectively. VWA aims to provide a comprehensive evaluation framework that considers these visually rich elements, thereby bridging a critical gap in assessing the capabilities of multimodal web agents.

2. Task Composition and Structural Diversity

Environments

VisualWebArena includes three realistic web environments:

  • Classifieds: Inspired by real marketplaces like Craigslist, it features over 65,000 listings that include images and textual descriptions.
  • Shopping: This environment models e-commerce platforms with data scraped from sources like Amazon.
  • Reddit: A social forum environment with posts containing diverse images and text.

Task Variety and Complexity

VWA consists of 910 tasks spread across these environments. Each task is designed to evaluate different aspects of an agent's capabilities, such as image-text comprehension and high-level decision-making. Tasks are categorized as easy, medium, or hard based on action and visual difficulty, with specific instances requiring sophisticated visual reasoning or spatial understanding.
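To make the task structure concrete, the sketch below shows a hypothetical task specification as a Python dict. Every field name and value here is an illustrative assumption; the benchmark defines its tasks in its own configuration schema, which may differ.

```python
# Hypothetical VWA-style task specification. All field names and values are
# illustrative assumptions; consult the benchmark repository for the real schema.
task = {
    "task_id": 42,
    "site": "classifieds",
    "intent": "Find the cheapest listing whose photo shows a red bicycle "
              "and leave a comment asking about its condition.",
    "input_images": [],                 # some tasks also supply reference images
    "difficulty": {"action": "medium", "visual": "hard"},
    "eval": {"type": "fuzzy_match", "reference": "asked about red bicycle's condition"},
}
```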

3. Observational and Action Spaces

Observation Space

Agents in VWA utilize a multi-faceted observation space, including:

  • Webpage Representations: Agents can access raw HTML, augmented screenshots, and accessibility trees.
  • Set-of-Marks (SoM): This is a key feature, providing screenshots annotated with bounding boxes and IDs for interactable elements, enhancing the agents’ ability to ground visual information and interact meaningfully with the page (a minimal annotation sketch follows this list).
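As a rough illustration of how SoM-style annotations can be produced, the sketch below overlays numbered bounding boxes on a screenshot using Pillow. The element list and its fields are assumptions made for this example, not VWA's internal representation.

```python
# Minimal Set-of-Marks-style annotation sketch. The element list and its
# fields ("id", "bbox") are hypothetical; VWA builds its marks from the
# page's interactable elements.
from PIL import Image, ImageDraw

def annotate_som(screenshot_path: str, elements: list[dict]) -> Image.Image:
    """Overlay numbered bounding boxes on a screenshot."""
    img = Image.open(screenshot_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for elem in elements:
        x0, y0, x1, y1 = elem["bbox"]  # pixel coordinates
        draw.rectangle([x0, y0, x1, y1], outline="red", width=2)
        draw.text((x0 + 2, y0 + 2), str(elem["id"]), fill="red")
    return img

# With numbered marks, an agent can emit actions like "click [2]" instead of
# raw coordinates or CSS selectors.
marked = annotate_som("page.png", [
    {"id": 1, "bbox": (40, 120, 220, 160)},   # e.g., a search box
    {"id": 2, "bbox": (240, 120, 300, 160)},  # e.g., a submit button
])
marked.save("page_som.png")
```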

Action Space

The action space encompasses a comprehensive suite of commands such as click, hover, and type, as well as navigational controls like opening and closing tabs and scrolling. Together, these allow agents to fully interact with and manipulate the web environments.
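One plausible way to dispatch such actions to a real browser is sketched below with Playwright. The action dictionary format and the resolve_selector helper (mapping SoM IDs to CSS selectors) are hypothetical; only the Playwright calls themselves are real API.

```python
# Sketch of an action dispatcher, assuming a Playwright-driven page and a
# hypothetical resolve_selector helper that maps SoM IDs to CSS selectors.
from playwright.sync_api import sync_playwright

def execute(page, action: dict, resolve_selector) -> None:
    """Map an abstract agent action onto concrete browser operations."""
    kind = action["type"]
    if kind == "click":
        page.click(resolve_selector(action["id"]))
    elif kind == "hover":
        page.hover(resolve_selector(action["id"]))
    elif kind == "type":
        page.fill(resolve_selector(action["id"]), action["text"])
    elif kind == "scroll":
        page.mouse.wheel(0, action.get("dy", 500))
    elif kind == "new_tab":
        page.context.new_page()
    elif kind == "close_tab":
        page.close()

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:7770")  # hypothetical local benchmark instance
    execute(page, {"type": "click", "id": 2}, lambda i: f"#elem-{i}")
    browser.close()
```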

4. Evaluation and Reward Mechanisms

VWA employs a robust programmatic execution model to evaluate agents. Tasks are assessed using a binary success metric derived from whether the agent's trajectory fulfills the task's objectives. This evaluation incorporates several specific reward functions, including exact and fuzzy textual matches, visual question answering (VQA) metrics, and the structural similarity index measure (SSIM) for image comparisons.
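As a rough illustration of these reward families, the sketch below implements exact match, a fuzzy-match proxy built on difflib, and an SSIM-based image comparison via scikit-image. These are stand-ins under stated assumptions, not the benchmark's actual evaluators; in particular, VWA's fuzzy-match and VQA checks rely on model judgments, which are omitted here.

```python
# Illustrative stand-ins for VWA's reward-function families; thresholds and
# the difflib-based fuzzy matcher are assumptions, not the paper's evaluators.
from difflib import SequenceMatcher

import numpy as np
from skimage.metrics import structural_similarity as ssim

def exact_match(pred: str, target: str) -> float:
    return float(pred.strip().lower() == target.strip().lower())

def fuzzy_match(pred: str, target: str, threshold: float = 0.8) -> float:
    # Similarity-ratio proxy for the benchmark's model-based fuzzy matching.
    return float(SequenceMatcher(None, pred.lower(), target.lower()).ratio() >= threshold)

def image_match(pred: np.ndarray, target: np.ndarray, threshold: float = 0.9) -> float:
    # SSIM over equally sized grayscale uint8 images; the threshold is an assumption.
    return float(ssim(pred, target) >= threshold)
```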

5. Baseline Models and Performance Evaluation

Several baseline models were evaluated using VWA to benchmark current capabilities:

  • Text-based LLMs: Text-only models such as GPT-4, supplied with image captions in place of raw pixels, showed limited success.
  • Multimodal Agents: Agents capable of processing both text and images, such as GPT-4V with SoM, demonstrated superior performance.

Even the best-performing agent, GPT-4V with SoM augmentation, achieved only a 16.4% success rate, significantly lower than human performance at 88.7%.

6. Identified Challenges and Future Directions

VisualWebArena has highlighted significant challenges for current multimodal agents. Text LLMs often fail to understand complex visual tasks, and although some multimodal models perform better, there remains a substantial gap to human performance. This gap is most pronounced in tasks involving OCR, spatial reasoning, and maintaining context over extended interactions.

The results suggest a need for improving agent frameworks to better integrate visual and linguistic data, enhance long-term memory and planning capabilities, and resolve complex visual-spatial reasoning challenges.

Conclusion

VisualWebArena provides a critical diagnostic tool for assessing multimodal web agents, combining a diverse task set with detailed evaluation metrics to guide future research and development. Its emphasis on realistic web environments and visually grounded tasks makes it an essential benchmark for developing the next generation of autonomous agents capable of navigating the visually complex web landscapes modern users interact with daily.

For detailed information and access to the benchmark resources, see: VisualWebArena on GitHub.
