AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks? (2407.15711v2)
Abstract: Language agents, built on top of language models (LMs), are systems that can interact with complex environments, such as the open web. In this work, we examine whether such agents can perform realistic and time-consuming tasks on the web, e.g., monitoring real-estate markets or locating relevant nearby businesses. We introduce AssistantBench, a challenging new benchmark consisting of 214 realistic tasks that can be automatically evaluated, covering different scenarios and domains. We find that AssistantBench exposes the limitations of current systems, including LMs and retrieval-augmented LMs, as no model reaches an accuracy of more than 26 points. While closed-book LMs perform well in terms of accuracy, they exhibit low precision and tend to hallucinate facts. State-of-the-art web agents reach near-zero scores. Additionally, we introduce SeePlanAct (SPA), a new web agent that significantly outperforms previous agents, and an ensemble of SPA and closed-book models reaches the best overall performance. Finally, we analyze the failures of current systems and highlight that open-web navigation remains a major challenge.
- Ori Yoran
- Samuel Joseph Amouyal
- Chaitanya Malaviya
- Ben Bogin
- Ofir Press
- Jonathan Berant