Assessing and Verifying Task Utility in LLM-Powered Applications (2405.02178v2)

Published 3 May 2024 in cs.CL and cs.AI

Abstract: The rapid development of LLMs has led to a surge in applications that facilitate collaboration among multiple agents, assisting humans in their daily tasks. However, a significant gap remains in assessing the extent to which LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the need to verify the utility of LLM-powered applications, particularly by ensuring alignment between an application's functionality and end-user needs. We introduce AgentEval, a novel framework designed to simplify the utility verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application. This allows for a comprehensive assessment that quantifies the utility of an application against the suggested criteria. We present a comprehensive analysis of the effectiveness and robustness of AgentEval on two open-source datasets: math problem solving and ALFWorld household tasks. For reproducibility purposes, we make the data, code and all logs publicly available at https://bit.ly/3w3yKcS .
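The two-step flow the abstract describes, proposing task-tailored criteria and then scoring an application's output against them, can be sketched as follows. This is a minimal illustration, not the paper's actual API: the function names and the fixed criteria are assumptions, and the stand-in quantifier uses a trivial heuristic where AgentEval would invoke LLM critic and quantifier agents.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """One utility criterion, with a graded scale of accepted values."""
    name: str
    description: str
    accepted_values: list[str]


def propose_criteria(task_description: str) -> list[Criterion]:
    # In AgentEval, a critic agent would generate criteria tailored to the
    # task description; here we return fixed criteria for a math task as a
    # stand-in.
    scale = ["poor", "fair", "good"]
    return [
        Criterion("accuracy", "Is the final answer correct?", scale),
        Criterion("clarity", "Are the solution steps easy to follow?", scale),
    ]


def quantify(solution: str, criteria: list[Criterion]) -> dict[str, str]:
    # A quantifier agent would rate the solution per criterion with an LLM;
    # this stand-in rates every criterion by a trivial length heuristic.
    rating = "good" if len(solution) > 20 else "fair"
    return {c.name: rating for c in criteria}


criteria = propose_criteria("Solve grade-school math word problems")
scores = quantify("x = 4 because 2x + 3 = 11 implies 2x = 8.", criteria)
print(scores)
```

The separation into a criteria-proposal step and a quantification step mirrors the framework's goal of reusing the same pipeline across very different applications, since only the proposed criteria change per task.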

Authors (8)
  1. Negar Arabzadeh (28 papers)
  2. Nikhil Mehta (34 papers)
  3. Qingyun Wu (1 paper)
  4. Chi Wang (93 papers)
  5. Ahmed Awadallah (27 papers)
  6. Charles L. A. Clarke (30 papers)
  7. Julia Kiseleva (33 papers)
  8. Siqing Huo (3 papers)
Citations (3)