
AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents (2404.06411v1)

Published 9 Apr 2024 in cs.AI and cs.CL

Abstract: The advances made by LLMs have led to the pursuit of LLM agents that can solve intricate, multi-step reasoning tasks. As with any research pursuit, benchmarking and evaluation are key cornerstones of efficient and reliable progress. However, existing benchmarks are often narrow and simply compute overall task success. To address these issues, we propose AgentQuest -- a framework where (i) both benchmarks and metrics are modular and easily extensible through well-documented, easy-to-use APIs, and (ii) we offer two new evaluation metrics that can reliably track LLM agent progress while solving a task. We exemplify the utility of the metrics on two use cases wherein we identify common failure points and refine the agent architecture to obtain a significant performance increase. Together with the research community, we hope to extend AgentQuest further, and we therefore make it available at https://github.com/nec-research/agentquest.
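
The abstract highlights two design pillars: benchmarks and metrics exposed through a modular, extensible API, and metrics that track an agent's progress during a task rather than only its final success. As a rough illustration of that idea, below is a minimal Python sketch of what such a driver/metric interface might look like. All names here (Driver, Observation, progress_rate, repetition_rate) are illustrative assumptions, not AgentQuest's actual API; see the linked repository for the real interfaces.

```python
# Hypothetical sketch of a modular benchmark driver plus two
# progress-tracking metrics, in the spirit of the abstract above.
# Names and signatures are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Observation:
    """What the environment returns after each agent action."""
    state: str           # textual description of the current state
    done: bool           # True once the task is fully solved
    milestones_hit: int  # how many known sub-goals have been reached


class Driver:
    """Base class a new benchmark would implement: reset the task,
    apply one agent action, and report the resulting observation."""

    def reset(self) -> Observation:
        raise NotImplementedError

    def step(self, action: str) -> Observation:
        raise NotImplementedError


def progress_rate(history: list[Observation], total_milestones: int) -> float:
    """Fraction of task milestones reached so far. Unlike binary task
    success, this keeps rising as the agent gets closer to the goal."""
    if not history or total_milestones == 0:
        return 0.0
    return history[-1].milestones_hit / total_milestones


def repetition_rate(actions: list[str]) -> float:
    """Fraction of actions that repeat an earlier one; a high value
    suggests the agent is looping instead of making progress."""
    if not actions:
        return 0.0
    repeats = len(actions) - len(set(actions))
    return repeats / len(actions)
```

The point of progress-style metrics like these is that two agents with identical (e.g., zero) overall success rates can still be distinguished: the one that reaches more milestones, or repeats itself less, is the better candidate for refinement. That matches how the abstract describes the metrics being used to identify common failure points and improve the agent architecture.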

