
GAIA: a benchmark for General AI Assistants (2311.12983v1)

Published 21 Nov 2023 in cs.CL and cs.AI

Abstract: We introduce GAIA, a benchmark for General AI Assistants that, if solved, would represent a milestone in AI research. GAIA proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency. GAIA questions are conceptually simple for humans yet challenging for most advanced AIs: we show that human respondents obtain 92% vs. 15% for GPT-4 equipped with plugins. This notable performance disparity contrasts with the recent trend of LLMs outperforming humans on tasks requiring professional skills in e.g. law or chemistry. GAIA's philosophy departs from the current trend in AI benchmarks suggesting to target tasks that are ever more difficult for humans. We posit that the advent of AGI hinges on a system's capability to exhibit similar robustness as the average human does on such questions. Using GAIA's methodology, we devise 466 questions and their answer. We release our questions while retaining answers to 300 of them to power a leader-board available at https://huggingface.co/gaia-benchmark.

Citations (90)

Summary

  • The paper presents GAIA, a benchmark with 466 rigorously crafted questions designed to evaluate fundamental AI capabilities such as reasoning and multimodal integration.
  • The paper shows that current AI models, including GPT-4, achieve only a fraction of human performance in tasks requiring complex generation and precise answers.
  • The paper emphasizes the need for evolving benchmarks to include diverse linguistic and cultural content and to address challenges like reproducibility in AI evaluations.

Introduction

The GAIA (General AI Assistants) benchmark presented in this paper is designed to evaluate AI systems on fundamental capabilities such as reasoning, handling multimodal information, web browsing, and proficient tool use. Traditional tasks on which AI can be trained may not be sufficient to measure progress towards AGI. GAIA therefore introduces a set of challenges that are conceptually simple for humans but difficult for AI, exposing the performance gap between humans and LLMs such as GPT-4 in contexts where humans excel but current models struggle.

Defining a New Benchmark

GAIA is a set of 466 rigorously crafted questions that require complex generation yet admit a single, factual answer, enabling easy and robust evaluation. The benchmark is designed to resist gaming: its questions draw on real-world diversity and unpredictability, so answers cannot easily be brute-forced or memorized. While acknowledging the remarkable, human-like performance of AI on specific professional tasks, GAIA redirects the focus from specialized knowledge to fundamental adeptness, giving a more faithful picture of how close current systems are to AGI. A short data-loading sketch follows below.
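To make the question format concrete, the sketch below shows one plausible way to pull GAIA's public questions from the Hugging Face Hub for inspection. The summary above only gives the leaderboard URL, so the dataset id, config name, and field names used here are assumptions and may need adjusting to the actual dataset card.

```python
# Minimal sketch, assuming the public GAIA split is hosted as a Hugging Face
# dataset. The id "gaia-benchmark/GAIA", the config "2023_all", the
# "validation" split, and the field names are hypothetical placeholders.
from datasets import load_dataset


def load_gaia_questions(config: str = "2023_all", split: str = "validation"):
    """Return GAIA records for the given config/split (ids are assumptions)."""
    ds = load_dataset("gaia-benchmark/GAIA", config, split=split)
    return list(ds)


if __name__ == "__main__":
    questions = load_gaia_questions()
    print(f"Loaded {len(questions)} questions")
    # Inspect one record to see the conceptually simple, single-answer format.
    print(questions[0])
```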

Evaluating AI Performance on GAIA

Assessing AI systems with GAIA entails automated, exact matching of model answers against the ground truth. This evaluation highlights the disparity between human and AI performance: current AI systems achieve only a small fraction of human success rates, even when equipped with various tools. These results indicate that while progress has been made in augmenting LLMs with capabilities beyond text understanding, substantial improvement is still needed before AI systems are fully competent in real-world settings.
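The sketch below illustrates the kind of automated exact-answer scoring this evaluation relies on. It is not the paper's official scorer: the normalization rules (case folding, punctuation stripping, numeric parsing, list handling) are assumptions chosen to show the general idea of comparing a free-form model answer against a single factual ground truth.

```python
# Hedged sketch of exact-answer scoring for GAIA-style questions.
# The specific normalization choices below are assumptions, not the
# benchmark's official implementation.
import re


def _normalize(text: str) -> str:
    """Lowercase, collapse whitespace, and trim surrounding punctuation."""
    return re.sub(r"\s+", " ", text.strip().lower()).strip(" .,;:!?")


def _as_number(text: str):
    """Try to parse a numeric answer (thousands separators and $/% stripped)."""
    cleaned = text.replace(",", "").replace("$", "").replace("%", "").strip()
    try:
        return float(cleaned)
    except ValueError:
        return None


def exact_match(prediction: str, ground_truth: str) -> bool:
    """Numeric equality if both sides parse as numbers, element-wise comparison
    for comma-separated lists, otherwise normalized string equality."""
    pred_num, gt_num = _as_number(prediction), _as_number(ground_truth)
    if pred_num is not None and gt_num is not None:
        return abs(pred_num - gt_num) < 1e-9
    if "," in ground_truth:
        preds = [_normalize(p) for p in prediction.split(",")]
        gts = [_normalize(g) for g in ground_truth.split(",")]
        return preds == gts
    return _normalize(prediction) == _normalize(ground_truth)


# Examples: exact_match("  3,000 ", "3000") -> True
#           exact_match("Paris.", "paris") -> True
```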

Implications and Future Directions

Creating and maintaining a benchmark like GAIA underscores the nuanced challenges of evaluating generative AI models. The benchmark also raises issues such as reproducibility with closed-source models and the likely need for the benchmark to evolve in order to remain relevant. The authors acknowledge GAIA's limitations, notably an over-representation of English-language content, and present it as a first step; they envisage community involvement leading to a more linguistically and culturally diverse set of AI evaluation criteria.
