Deep Research Bench: Evaluating AI Web Research Agents (2506.06287v1)

Published 6 Jun 2025 in cs.AI

Abstract: Amongst the most common use cases of modern AI is LLM chat with web search enabled. However, no direct evaluations of the quality of web research agents exist that control for the continually-changing web. We introduce Deep Research Bench, consisting of 89 multi-step web research task instances of varying difficulty across 8 diverse task categories, with the answers carefully worked out by skilled humans. We provide a "RetroSearch" environment with a large frozen set of scraped web pages, and demonstrate that offline "RetroSearch" agents perform comparably to "live web" agents, enabling reliable evaluations of models over time. We provide robust agent tooling and scaffolding to benchmark major LLMs as they are released, including "thinking" models like o3 and Gemini 2.5 Pro. We include automated evaluations of the lengthy agent traces to report progress over time in hallucinations, tool use, and forgetting. Finally, we evaluate the major web research products branded as "Deep Research", "Deep Search", "Search", or "Research." Results are available on a public leaderboard at https://drb.futuresearch.ai/.
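
To make the "RetroSearch" setup concrete, the sketch below shows one way an offline search tool over a frozen page snapshot could look. This is not the paper's actual tooling: the snapshot file format, the `RetroSearchIndex` class, and the keyword scoring are all assumptions for illustration.

```python
# Hypothetical sketch of a RetroSearch-style offline search tool.
# Agents query a frozen corpus of scraped pages instead of the live web,
# so results stay stable across evaluation runs.
import json
from dataclasses import dataclass


@dataclass
class FrozenPage:
    url: str
    title: str
    text: str  # scraped page content, frozen at snapshot time


class RetroSearchIndex:
    """Keyword search over a frozen snapshot of scraped web pages."""

    def __init__(self, snapshot_path: str):
        # Assumed format: one JSON object per line with url/title/text keys.
        with open(snapshot_path, encoding="utf-8") as f:
            self.pages = [FrozenPage(**json.loads(line)) for line in f]

    def search(self, query: str, k: int = 5) -> list[FrozenPage]:
        # Naive relevance: count query-term occurrences in each page.
        terms = query.lower().split()
        scored = [
            (sum(p.text.lower().count(t) for t in terms), p)
            for p in self.pages
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [p for score, p in scored[:k] if score > 0]

    def fetch(self, url: str) -> str | None:
        # "Visiting" a page returns the frozen text, never the live web.
        for p in self.pages:
            if p.url == url:
                return p.text
        return None
```

An agent loop would call `search()` and `fetch()` exactly as it would a live-web tool, but every run sees the same pages, which is what allows reliable comparison of models over time.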

Authors (9)
  1. FutureSearch (1 paper)
  2. Nikos I. Bosse (5 papers)
  3. Jon Evans (1 paper)
  4. Robert G. Gambee (1 paper)
  5. Daniel Hnyk (1 paper)
  6. Peter Mühlbacher (6 papers)
  7. Lawrence Phillips (11 papers)
  8. Dan Schwarz (2 papers)
  9. Jack Wildman (1 paper)
