VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use (2308.06595v4)

Published 12 Aug 2023 in cs.CL, cs.AI, and cs.CV

Abstract: We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for evaluation of instruction-following vision-language models for real-world use. Our starting point is curating 70 'instruction families' that we envision instruction-tuned vision-language models should be able to address. Extending beyond evaluations like VQAv2 and COCO, tasks range from basic recognition to game playing and creative generation. Following curation, our dataset comprises 592 test queries, each with a human-authored instruction-conditioned caption. These descriptions surface instruction-specific factors, e.g., for an instruction asking about the accessibility of a storefront for wheelchair users, the instruction-conditioned caption describes ramps/potential obstacles. These descriptions enable 1) collecting human-verified reference outputs for each instance; and 2) automatic evaluation of candidate multimodal generations using a text-only LLM, aligning with human judgment. We quantify quality gaps between models and references using both human and automatic evaluations; e.g., the top-performing instruction-following model wins against the GPT-4 reference in just 27% of comparisons. VisIT-Bench is dynamic: to participate, practitioners simply submit their model's responses on the project website; data, code, and the leaderboard are available at visit-bench.github.io.

Citations (63)

Summary

  • The paper introduces VisIT-Bench, a benchmark that evaluates instruction-following vision-language models on 70 realistic task families with 592 queries.
  • It employs both human and GPT-4-based automated assessments to reveal notable competency gaps, with LLaMA-Adapter-v2 winning only 27.4% of comparisons against human-verified references.
  • The benchmark’s flexible framework supports model submissions and continuous improvements, advancing real-world AI evaluation and multimodal research.

VisIT-Bench: A Vision-Language Instruction Benchmark for Real-World Applications

The paper introduces VisIT-Bench, a benchmark specifically designed to evaluate instruction-following vision-language models under realistic conditions. The research addresses a longstanding challenge in artificial intelligence: developing general-purpose assistants capable of executing diverse, previously unseen tasks in collaboration with humans. Unlike conventional benchmarks that focus on fixed, task-specific performance, VisIT-Bench offers a dynamic testing ground encompassing 70 instruction families that mirror real-world applications.

The benchmark includes 592 challenging test queries, each paired with a human-authored instruction-conditioned caption. These captions surface the instruction-specific details of each image, enabling both human-verified reference collection and precise automatic assessment of multimodal outputs. The benchmark stands out by extending beyond standard evaluations like VQAv2 and COCO, exploring tasks that range from basic recognition to complex reasoning, creative generation, and game playing. A key feature of VisIT-Bench is its open participation: practitioners submit their model's responses on the project website, where data, code, and a leaderboard are available.
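To make the structure of the dataset concrete, the following is a minimal sketch of the fields a single VisIT-Bench instance carries, based on the description above. The field names are illustrative assumptions rather than the dataset's actual schema; the official data and loading code are linked from visit-bench.github.io.

```python
from dataclasses import dataclass


@dataclass
class VisITBenchInstance:
    """Hypothetical representation of one VisIT-Bench test query."""
    instruction_family: str                # one of the 70 curated task families
    image_url: str                         # publicly available image for the query
    instruction: str                       # the natural-language instruction
    instruction_conditioned_caption: str   # human-authored, instruction-specific description
    reference_output: str                  # human-verified reference response
```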

The methodology for VisIT-Bench's creation involved data curation based on the projected capabilities of instruction-tuned vision-language models. This yields a dataset covering ten instances per instruction family and 1,159 public images in total. Human annotators collected and verified the responses, producing reference outputs of higher quality than standard automated generation alone. The annotation process also showed that instruction-conditioned captions significantly improved task comprehension and completion.

Through empirical analysis, the paper demonstrates substantial model competency gaps using both human evaluations and an automated evaluation system. Notably, the LLaMA-Adapter-v2 model won only 27.4% of comparisons against human-verified references, highlighting the current limitations of instruction-following models relative to human judgment. The automated evaluation, which uses GPT-4 as a judge, agreed with human preferences in 94% of unanimous cases, supporting the validity of the benchmark's quantitative analysis.
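As a rough illustration of how such a GPT-4-judged pairwise comparison and win-rate computation could be wired up, the sketch below uses the OpenAI Python SDK. The prompt wording, model identifier, and field names are assumptions for clarity and are not the paper's exact evaluation protocol; the official evaluation code is linked from visit-bench.github.io.

```python
# Minimal sketch of a GPT-4-as-judge pairwise comparison in the spirit of
# VisIT-Bench's automatic evaluation. Prompt, model name, and field names
# are illustrative assumptions, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are judging two responses to a visual instruction.
Image description: {caption}
Instruction: {instruction}

Response A: {response_a}
Response B: {response_b}

Which response follows the instruction better? Answer with exactly "A" or "B"."""


def judge_pair(caption, instruction, response_a, response_b):
    """Ask a text-only judge model to pick the better response ("A" or "B")."""
    completion = client.chat.completions.create(
        model="gpt-4",  # the paper uses a GPT-4 judge; the exact version may differ
        temperature=0,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                caption=caption,
                instruction=instruction,
                response_a=response_a,
                response_b=response_b,
            ),
        }],
    )
    return completion.choices[0].message.content.strip()


def win_rate(instances):
    """Fraction of instances where the candidate (response A) beats the reference.

    Each instance is assumed to be a dict with 'caption', 'instruction',
    'candidate', and 'reference' keys (hypothetical field names).
    """
    wins = sum(
        judge_pair(ex["caption"], ex["instruction"], ex["candidate"], ex["reference"])
        .upper()
        .startswith("A")
        for ex in instances
    )
    return wins / len(instances)
```

In practice one would also randomize which response appears as A versus B, since LLM judges can exhibit position bias; the sketch keeps a fixed order only for brevity.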

The implications of the VisIT-Bench benchmark are both practical and theoretical. Practically, it provides insights into model performance in scenarios akin to real human use, enabling a nuanced understanding of model strengths and weaknesses across diverse tasks. Theoretically, it strengthens the vision-language research paradigm, encouraging innovations that narrow the gap between human and AI capabilities. Given its open-ended nature, VisIT-Bench serves as a platform for continuously evaluating, refining, and advancing multimodal models, promoting transparency and collaborative progress within the AI community.

Looking ahead, the paper suggests expanding the task categories, increasing the number of instances per family, incorporating additional modalities such as audio and video, and exploring multi-turn dialogues to further enrich interaction models. It also acknowledges constraints, notably the present focus on single-turn image-text tasks and the exclusion of other interaction forms. Despite these limitations, VisIT-Bench stands as a substantial contribution to the evolving landscape of AI evaluation benchmarks, offering a robust framework for aligning AI models with real-world applications and driving forward the quest for versatile, high-functioning AI systems.
