VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents (2408.06327v1)

Published 12 Aug 2024 in cs.AI, cs.CL, and cs.CV

Abstract: Large Multimodal Models (LMMs) have ushered in a new era in artificial intelligence, merging capabilities in both language and vision to form highly capable Visual Foundation Agents. These agents are postulated to excel across a myriad of tasks, potentially approaching general artificial intelligence. However, existing benchmarks fail to sufficiently challenge or showcase the full potential of LMMs in complex, real-world environments. To address this gap, we introduce VisualAgentBench (VAB), a comprehensive and pioneering benchmark specifically designed to train and evaluate LMMs as visual foundation agents across diverse scenarios, including Embodied, Graphical User Interface, and Visual Design, with tasks formulated to probe the depth of LMMs' understanding and interaction capabilities. Through rigorous testing across nine proprietary LMM APIs and eight open models, we demonstrate the considerable yet still developing agent capabilities of these models. Additionally, VAB constructs a trajectory training set constructed through hybrid methods including Program-based Solvers, LMM Agent Bootstrapping, and Human Demonstrations, promoting substantial performance improvements in LMMs through behavior cloning. Our work not only aims to benchmark existing models but also provides a solid foundation for future development into visual foundation agents. Code, train & test data, and part of fine-tuned open LMMs are available at \url{https://github.com/THUDM/VisualAgentBench}.

VisualAgentBench: Benchmarking LMMs as Visual Foundation Agents

The paper introduces VisualAgentBench (VAB), a systematic and comprehensive benchmark designed to train and evaluate Large Multimodal Models (LMMs) as visual foundation agents. These agents integrate language and vision capabilities and are expected to operate across diverse scenarios, excelling at multitask problems much as LLMs do in text-only settings. Because existing benchmarks do not sufficiently challenge LMMs in realistic environments, the authors develop VAB to fill this gap.

VAB encompasses a series of environments that simulate practical challenges faced by LMMs, categorized into Embodied, Graphical User Interface (GUI), and Visual Design modalities. The benchmark facilitates training and evaluation through a diverse set of tasks:

  1. Embodied Scenarios: VAB-OmniGibson and VAB-Minecraft simulate household and game environments, challenging LMMs with object manipulation and resource collection tasks, respectively. These tasks demand effective interaction with the environment and long-term planning capabilities.
  2. Graphical User Interface Scenarios: VAB-Mobile and VAB-WebArena-Lite are designed for testing GUI agents. They require understanding complex user interfaces in mobile and web applications, necessitating high-level decision making and interaction capabilities.
  3. Visual Design Scenarios: VAB-CSS tasks evaluate LMMs' abilities in web frontend design, where agents are required to fix CSS style issues through iterative problem-solving, which involves both aesthetic and functional reasoning.

Each scenario in VAB presents tailored challenges for LMMs, demanding robust visual grounding and planning abilities. LMMs are evaluated through interactive scenarios where success is measured by agents' capabilities to adapt and solve complex problems using multimodal inputs.
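To make the evaluation protocol concrete, below is a minimal sketch of such an interaction loop in Python. The `DummyEnv` class, the `query_lmm` helper, and the action strings are illustrative assumptions, not the actual VisualAgentBench API.

```python
# Minimal sketch of an agent-environment interaction loop of the kind used to
# evaluate LMM agents. The DummyEnv interface, query_lmm helper, and action
# strings are illustrative assumptions, not the actual VisualAgentBench API.
from dataclasses import dataclass


@dataclass
class Observation:
    screenshot: bytes   # rendered frame or GUI screenshot
    instruction: str    # natural-language task description


class DummyEnv:
    """Hypothetical environment with a reset/step interface."""

    def reset(self) -> Observation:
        return Observation(b"", "Put the apple in the fridge.")

    def step(self, action: str):
        # Returns (next_observation, done, success); ends immediately for brevity.
        return Observation(b"", ""), True, False


def query_lmm(obs: Observation, history: list[str]) -> str:
    """Placeholder for a call to a proprietary or open LMM."""
    return "noop()"


def run_episode(env: DummyEnv, max_steps: int = 20) -> bool:
    """Roll out one task; success is judged by the environment, not the agent."""
    obs = env.reset()
    history: list[str] = []
    for _ in range(max_steps):
        action = query_lmm(obs, history)  # plan the next action from pixels + text
        history.append(action)
        obs, done, success = env.step(action)
        if done:
            return success
    return False


if __name__ == "__main__":
    print("episode success:", run_episode(DummyEnv()))
```

In the benchmark itself, each environment defines its own action space and its own programmatic success check, and the reported metric is the success rate over tasks.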

Key Contributions and Implications:

  • Introduction of VAB: By creating this benchmark, the paper provides a new standard for assessing LMMs, expanding the scope of evaluation beyond traditional tasks like Visual Question Answering (VQA) or Optical Character Recognition (OCR). VAB aligns more closely with real-world scenarios that foundation models are likely to encounter.
  • Training via Trajectory Data: VAB leverages a hybrid data curation pipeline (program-based solvers, LMM agent bootstrapping, and human demonstrations) to construct a training set of 4,482 trajectories for behavior cloning. Fine-tuning open LMMs on this data yields substantial performance gains; a minimal sketch of this supervised setup follows the list.
  • Evaluation of Proprietary and Open LMMs: The results indicate a substantial gap between proprietary LMM APIs and fine-tuned open models, with top proprietary models such as gpt-4o achieving higher success rates in complex environments. However, the benchmark also reveals pathways for open models to become competitive through improved training strategies.
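The behavior-cloning stage amounts to supervised fine-tuning on the collected trajectories. The sketch below shows one plausible way to flatten trajectories into (prompt, target) pairs; the field names and prompt template are assumptions for illustration, not the dataset's actual schema.

```python
# Minimal sketch of turning agent trajectories into supervised (behavior-cloning)
# training pairs. Field names and the prompt format are illustrative assumptions,
# not the actual VAB data schema.
from dataclasses import dataclass


@dataclass
class Step:
    observation: str   # serialized observation (e.g., screenshot caption or UI tree)
    action: str        # expert action taken at this step


@dataclass
class Trajectory:
    instruction: str
    steps: list[Step]


def to_sft_pairs(traj: Trajectory) -> list[tuple[str, str]]:
    """Each step becomes one (prompt, target) pair; the LMM is trained to
    reproduce the expert action given the task and the interaction history."""
    pairs = []
    history = ""
    for step in traj.steps:
        prompt = (
            f"Task: {traj.instruction}\n"
            f"History: {history}\n"
            f"Observation: {step.observation}\n"
            f"Action:"
        )
        pairs.append((prompt, step.action))
        history += f" {step.action};"
    return pairs


if __name__ == "__main__":
    demo = Trajectory(
        instruction="Fix the button alignment on the page",
        steps=[
            Step("button is left-aligned", "edit_css('text-align: center')"),
            Step("button is centered", "finish()"),
        ],
    )
    for prompt, target in to_sft_pairs(demo):
        print(prompt, "->", target)
```

Training then minimizes the standard next-token cross-entropy loss on the target action tokens, which is what "behavior cloning" means in this supervised setting.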

Future Directions and Conclusions:

The introduction of VAB underscores the need to bridge the gap between proprietary and open LMMs in agent capability. The diverse and realistic challenges it poses provide a rich testbed for research on improving LMMs' multimodal reasoning and interaction abilities. The authors plan to complement behavior cloning with reinforcement learning strategies, pushing LMMs toward more general and versatile visual foundation agents. The benchmark both characterizes the current state of LMMs and charts a direction for future work toward AGI.

Authors (30)
  1. Xiao Liu
  2. Tianjie Zhang
  3. Yu Gu
  4. Iat Long Iong
  5. Yifan Xu
  6. Xixuan Song
  7. Shudan Zhang
  8. Hanyu Lai
  9. Xinyi Liu
  10. Hanlin Zhao
  11. Jiadai Sun
  12. Xinyue Yang
  13. Yu Yang
  14. Zehan Qi
  15. Shuntian Yao
  16. Xueqiao Sun
  17. Siyi Cheng
  18. Qinkai Zheng
  19. Hao Yu
  20. Hanchen Zhang