AgentBench: Evaluating LLMs as Agents (2308.03688v2)

Published 7 Aug 2023 in cs.AI, cs.CL, and cs.LG

Abstract: LLMs are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at \url{https://github.com/THUDM/AgentBench}.

AgentBench: Evaluating LLMs as Agents

This paper introduces AgentBench, a systematic benchmark designed to evaluate LLMs as agents across a diverse set of environments. Given the increasing role of LLMs in real-world interactive tasks, assessing their ability to serve as intelligent agents has become crucial. AgentBench sets the foundation for evaluating these capabilities by providing a robust framework encompassing eight distinct environments.

Key Contributions

  1. Comprehensive Benchmark Design: AgentBench tests LLMs across code-, game-, and web-grounded scenarios, realized as eight environments: Operating System (OS), Database (DB), Knowledge Graph (KG), Digital Card Game (DCG), Lateral Thinking Puzzles (LTP), House-Holding (HH), Web Shopping (WS), and Web Browsing (WB).
  2. Diverse Task Evaluation: The tasks demand decision-making, instruction following, and multi-turn reasoning, so the assessment probes coding proficiency, logical reasoning, and strategic planning in a multi-turn, open-ended setting.
  3. Comparative Study of LLMs: The paper evaluates 27 different LLMs, both commercial API-based and open-sourced, revealing significant performance disparities. While models like GPT-4 exhibit advanced capabilities, many open-sourced models lag considerably.
  4. Insightful Error Analysis: The authors categorize reasons for task failures, such as Context Limit Exceeded (CLE) and Invalid Action (IA), providing insights into areas needing improvement. This analysis highlights the challenges in long-term reasoning and decision-making that current models face.
  5. Framework and Toolkit: AgentBench ships a modular evaluation framework built on a server-client architecture that decouples task environments from the models under test, supporting simultaneous evaluation of multiple models and tasks; a minimal sketch of such a rollout loop follows this list.
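
To make the server-client design and the failure taxonomy concrete, here is a minimal sketch of a single-environment rollout with failure-mode bookkeeping. It is illustrative only: the class and function names (TaskServer, AgentClient, run_episode), the stubbed step logic, and the outcome labels are assumptions for exposition, not the actual AgentBench package interfaces.

```python
# Minimal sketch of a server-client rollout loop in the spirit of AgentBench's
# framework. TaskServer, AgentClient, and run_episode are illustrative names,
# not the real AgentBench API; the step logic is stubbed out.
from collections import Counter
from dataclasses import dataclass


@dataclass
class TaskServer:
    """Hosts one environment (e.g. OS, DB, KG) and scores each turn."""
    name: str
    max_turns: int = 20

    def reset(self) -> str:
        return f"[{self.name}] initial observation"

    def step(self, action: str) -> tuple[str, bool, float]:
        # Returns (observation, done, reward); a real server would execute the action.
        done = action.strip().lower().startswith("finish")
        return "ok", done, 1.0 if done else 0.0


@dataclass
class AgentClient:
    """Wraps an LLM endpoint; stubbed here to finish immediately."""
    model: str

    def act(self, history: list[str]) -> str:
        # A real client would build a prompt from `history` and call the model;
        # Context Limit Exceeded (CLE) would be detected at this stage.
        return "finish"


def run_episode(server: TaskServer, agent: AgentClient, outcomes: Counter) -> float:
    """Multi-turn rollout that tallies outcomes such as Invalid Action (IA)."""
    history = [server.reset()]
    for _ in range(server.max_turns):
        action = agent.act(history)
        if not action.strip():            # unparsable/empty output -> Invalid Action
            outcomes["IA"] += 1
            return 0.0
        obs, done, reward = server.step(action)
        history += [action, obs]
        if done:
            outcomes["Completed"] += 1
            return reward
    outcomes["TLE"] += 1                  # turn budget exhausted (Task Limit Exceeded)
    return 0.0


if __name__ == "__main__":
    tally = Counter()
    score = run_episode(TaskServer("OS"), AgentClient("some-llm"), tally)
    print(score, dict(tally))             # 1.0 {'Completed': 1} with the stubs above
```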

Numerical and Empirical Findings

The empirical findings reveal a stark contrast between top-tier commercial models and their open-sourced counterparts: GPT-4 achieves an overall score of 4.01, while many OSS models score below 1.00. The paper emphasizes the need for improvements, particularly in long-term reasoning and adherence to instruction formats.
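
Because the eight environments report heterogeneous metrics (success rates, rewards, match scores), an overall score has to put them on a common scale before averaging. The snippet below is a minimal sketch of one plausible aggregation, assuming each environment's raw score is normalized by the average score of all evaluated models on that environment; the paper's exact weighting scheme may differ, and the numbers shown are placeholders, not reported results.

```python
# Hedged sketch of aggregating per-environment results into one overall score.
# The raw numbers below are made-up placeholders, not results from the paper.
from statistics import mean

raw_scores = {
    "model_a": {"OS": 0.40, "DB": 0.30, "WB": 0.20},
    "model_b": {"OS": 0.10, "DB": 0.05, "WB": 0.10},
}
envs = ["OS", "DB", "WB"]

# Normalize each environment by the average score of all models on it,
# so no single metric scale dominates the overall number.
env_avg = {e: mean(scores[e] for scores in raw_scores.values()) for e in envs}
overall = {m: mean(scores[e] / env_avg[e] for e in envs) for m, scores in raw_scores.items()}

print(overall)  # roughly {'model_a': 1.55, 'model_b': 0.45}
```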

Implications and Future Directions

The implications of this work are significant for both the theoretical understanding and practical deployment of LLMs as agents. By highlighting the potential and current limitations of LLMs, AgentBench sets the stage for ongoing research aimed at improving model alignment, reasoning strategies, and autonomous agent capabilities.

The findings suggest directions for enhancing performance, such as integrating high-quality alignment data and improving code training strategies. Future advancements in LLMs will likely focus on bridging the gaps identified, aiming for models that not only excel in task-specific benchmarks but also demonstrate robust generalist capabilities in multi-modal, real-world scenarios.

AgentBench positions itself as a cornerstone in the evaluation of LLM-as-Agent, providing a platform that can evolve alongside developments in AI, ensuring continued relevance and utility in assessing the growing capabilities of LLMs.

Authors (22)
  1. Xiao Liu (402 papers)
  2. Hao Yu (195 papers)
  3. Hanchen Zhang (5 papers)
  4. Yifan Xu (92 papers)
  5. Xuanyu Lei (10 papers)
  6. Hanyu Lai (11 papers)
  7. Yu Gu (218 papers)
  8. Hangliang Ding (4 papers)
  9. Kaiwen Men (2 papers)
  10. Kejuan Yang (3 papers)
  11. Shudan Zhang (7 papers)
  12. Xiang Deng (43 papers)
  13. Aohan Zeng (19 papers)
  14. Zhengxiao Du (22 papers)
  15. Chenhui Zhang (16 papers)
  16. Sheng Shen (68 papers)
  17. Tianjun Zhang (38 papers)
  18. Yu Su (138 papers)
  19. Huan Sun (88 papers)
  20. Minlie Huang (225 papers)
Citations (189)