DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems (2407.10701v1)

Published 15 Jul 2024 in cs.CL

Abstract: Recently, there has been growing interest among LLM developers in LLM-based document reading systems, which enable users to upload their own documents and pose questions related to the document contents, going beyond simple reading comprehension tasks. Consequently, these systems have been carefully designed to tackle challenges such as file parsing, metadata extraction, multi-modal information understanding, and long-context reading. However, no current benchmark evaluates their performance in such scenarios, where a raw file and questions are provided as input and a corresponding response is expected as output. In this paper, we introduce DocBench, a new benchmark designed to evaluate LLM-based document reading systems. Our benchmark involves a meticulously crafted process, including the recruitment of human annotators and the generation of synthetic questions. It includes 229 real documents and 1,102 questions, spanning five different domains and four major types of questions. We evaluate both proprietary LLM-based systems accessible via web interfaces or APIs, and a parse-then-read pipeline employing open-source LLMs. Our evaluations reveal noticeable gaps between existing LLM-based document reading systems and human performance, underscoring the challenges of developing proficient systems. To summarize, DocBench aims to establish a standardized benchmark for evaluating LLM-based document reading systems under diverse real-world scenarios, thereby guiding future advancements in this research area.
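The parse-then-read baseline mentioned in the abstract can be sketched as a two-stage pipeline: first convert the raw file into plain text, then pass the text plus the question to an LLM. The sketch below is illustrative only; the function names (`parse_document`, `parse_then_read`) and the toy stand-in LLM are assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a parse-then-read pipeline: (1) parse the raw file into
# plain text, (2) build a prompt from the text and the question, (3) query a
# pluggable LLM callable. All names here are hypothetical.

def parse_document(raw_bytes: bytes) -> str:
    """Stand-in parser: a real system would dispatch to a PDF/DOCX parser."""
    return raw_bytes.decode("utf-8", errors="ignore")

def parse_then_read(raw_bytes: bytes, question: str, llm) -> str:
    """Parse the file, assemble a prompt, and ask the LLM for an answer."""
    context = parse_document(raw_bytes)
    prompt = f"Document:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)

def toy_llm(prompt: str) -> str:
    """Toy stand-in for an open-source LLM: echoes the document's first line."""
    doc = prompt.split("Document:\n", 1)[1].split("\n\nQuestion:", 1)[0]
    lines = doc.splitlines()
    return lines[0] if lines else ""

if __name__ == "__main__":
    raw = "DocBench has 229 documents.\nAnd 1,102 questions.".encode()
    print(parse_then_read(raw, "How many documents?", toy_llm))
```

The LLM is passed in as a callable so the same pipeline can wrap any backend (an API client or a local open-source model); only the parsing and prompt-assembly stages are fixed.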

Authors (8)
  1. Anni Zou (6 papers)
  2. Wenhao Yu (139 papers)
  3. Hongming Zhang (111 papers)
  4. Kaixin Ma (35 papers)
  5. Deng Cai (181 papers)
  6. Zhuosheng Zhang (125 papers)
  7. Hai Zhao (227 papers)
  8. Dong Yu (328 papers)
Citations (3)