Assessing and Verifying Task Utility in LLM-Powered Applications (2405.02178v2)
Abstract: The rapid development of LLMs has led to a surge in applications that facilitate collaboration among multiple agents, assisting humans in their daily tasks. However, a significant gap remains in assessing the extent to which LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the need to verify the utility of LLM-powered applications, particularly by ensuring alignment between an application's functionality and end-user needs. We introduce AgentEval, a novel framework designed to simplify the utility verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application. This allows for a comprehensive assessment that quantifies the utility of an application against the suggested criteria. We present a comprehensive analysis of the effectiveness and robustness of AgentEval on two open-source datasets: math problem solving and ALFWorld household tasks. For reproducibility, we make the data, code, and all logs publicly available at https://bit.ly/3w3yKcS .
- Negar Arabzadeh (28 papers)
- Nikhil Mehta (34 papers)
- Qingyun Wu (1 paper)
- Chi Wang (93 papers)
- Ahmed Awadallah (27 papers)
- Charles L. A. Clarke (30 papers)
- Julia Kiseleva (33 papers)
- Siqing Huo (3 papers)
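
For illustration, below is a minimal Python sketch of the two-phase workflow the abstract describes: a critic step that proposes task-specific criteria, followed by a quantifier step that scores a solution against those criteria. The `call_llm` stub, prompt wording, and JSON formats are assumptions made for this sketch; they are not the AgentEval implementation or API.

```python
# Sketch of the criteria-proposal + quantification idea (hypothetical helpers,
# not the AgentEval API). `call_llm` is a placeholder for any chat-completion call.
import json

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM call. Canned JSON keeps the sketch runnable.
    if "propose criteria" in prompt.lower():
        return json.dumps([
            {"name": "correctness", "description": "Is the final answer correct?",
             "accepted_values": ["yes", "no"]},
            {"name": "efficiency", "description": "Are the solution steps concise?",
             "accepted_values": ["low", "medium", "high"]},
        ])
    return json.dumps({"correctness": "yes", "efficiency": "medium"})

def propose_criteria(task_description: str) -> list[dict]:
    """Critic phase: ask the LLM for criteria tailored to the task's purpose."""
    prompt = (
        f"Propose criteria for judging solutions to this task:\n{task_description}\n"
        "Return a JSON list of objects with name, description, accepted_values."
    )
    return json.loads(call_llm(prompt))

def quantify(task_description: str, solution: str, criteria: list[dict]) -> dict:
    """Quantifier phase: score one solution against each proposed criterion."""
    prompt = (
        f"Task: {task_description}\nSolution: {solution}\n"
        f"Criteria: {json.dumps(criteria)}\n"
        "Return a JSON object mapping each criterion name to an accepted value."
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    task = "Solve the math word problem and show your reasoning."
    solution = "Step-by-step solution ending with the answer 42."
    criteria = propose_criteria(task)
    scores = quantify(task, solution, criteria)
    print(scores)  # e.g. {"correctness": "yes", "efficiency": "medium"}
```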