Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications (2402.09015v3)

Published 14 Feb 2024 in cs.CL and cs.AI

Abstract: The rapid development in the field of LLMs has led to a surge in applications that facilitate collaboration among multiple agents to assist humans in their daily tasks. However, a significant gap remains in assessing whether LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the pressing need for methods to verify the utility of LLM-powered applications, particularly by ensuring alignment between an application's functionality and end-user needs. We introduce AgentEval, a novel framework designed to simplify the utility-verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application; we also provide an implementation for math problems. This allows for a comprehensive assessment, quantifying the utility of an application against the suggested criteria. We present a comprehensive analysis of the robustness of the quantifier's work.
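The two-step process the abstract describes (propose task-specific criteria, then score the application against each criterion) can be sketched in plain Python. This is a hypothetical illustration only: the function names, the fixed criteria list, and the keyword-matching scorer are stand-ins, not the actual AgentEval implementation, which uses LLM "critic" and "quantifier" agents for both steps.

```python
# Hedged sketch of the criteria-proposal / quantification pipeline.
# All names and logic are illustrative assumptions, not AgentEval's API.

def propose_criteria(task_description: str) -> list[str]:
    # In AgentEval this is done by an LLM critic agent; here we return
    # fixed criteria for a math-problem application as a placeholder.
    return ["accuracy", "clarity_of_solution", "efficiency"]

def quantify(criteria: list[str], execution_log: str) -> dict[str, int]:
    # In AgentEval an LLM quantifier agent scores each criterion; here
    # a toy heuristic scores 1 if the log mentions the criterion's stem.
    return {c: int(c.split("_")[0] in execution_log) for c in criteria}

criteria = propose_criteria("solve grade-school math word problems")
scores = quantify(criteria,
                  "final answer correct; accuracy verified; steps shown")
print(scores)  # e.g. {'accuracy': 1, 'clarity_of_solution': 0, 'efficiency': 0}
```

The separation into a proposal step and a scoring step mirrors the framework's design goal: criteria adapt to each application's purpose, while quantification stays uniform across criteria.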

Authors (8)
  1. Negar Arabzadeh (28 papers)
  2. Julia Kiseleva (33 papers)
  3. Qingyun Wu (47 papers)
  4. Chi Wang (93 papers)
  5. Ahmed Awadallah (27 papers)
  6. Victor Dibia (15 papers)
  7. Adam Fourney (16 papers)
  8. Charles Clarke (4 papers)
Citations (4)