DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Language Models (2406.11633v2)
Abstract: Scientific documents record research findings and valuable human knowledge, comprising a vast corpus of high-quality data. Leveraging multi-modal data extracted from these documents and assessing large models' abilities to handle scientific document-oriented tasks is therefore meaningful. Despite promising advancements, large models still perform poorly on multi-page scientific document extraction and understanding tasks, and their capacity to process within-document data formats such as charts and equations remains under-explored. To address these issues, we present DocGenome, a structured document benchmark constructed by annotating 500K scientific documents from 153 disciplines in the arXiv open-access community using our custom auto-labeling pipeline. DocGenome has four key characteristics: 1) Completeness: It is the first dataset to structure data from all modalities, including 13 layout attributes along with their LaTeX source code. 2) Logicality: It provides 6 logical relationships between different entities within each scientific document. 3) Diversity: It covers various document-oriented tasks, including document classification, visual grounding, document layout detection, document transformation, open-ended single-page QA, and multi-page QA. 4) Correctness: It undergoes rigorous quality-control checks conducted by a specialized team. We conduct extensive experiments to demonstrate the advantages of DocGenome and objectively evaluate the performance of large models on our benchmark.
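To make the structured annotation concrete, the sketch below models the kind of record the abstract describes: entities tagged with a layout attribute and paired LaTeX source, plus logical relations between entities. This is a hypothetical illustration only; every field, attribute, and relation name here is an assumption, not DocGenome's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured document record as described in the
# abstract. Attribute and relation names are illustrative assumptions,
# not the dataset's real label set.

@dataclass
class Entity:
    entity_id: int
    layout_attribute: str   # one of the 13 layout attributes (e.g., "Figure")
    latex_source: str       # the entity's paired LaTeX source code
    bbox: tuple             # (x0, y0, x1, y1) on the rendered page

@dataclass
class Relation:
    source_id: int
    target_id: int
    relation_type: str      # one of the 6 logical relationships

@dataclass
class DocumentAnnotation:
    arxiv_id: str
    discipline: str         # one of the 153 arXiv disciplines
    entities: list = field(default_factory=list)
    relations: list = field(default_factory=list)

# Example: a figure and its caption, linked by an assumed "caption-of" relation.
doc = DocumentAnnotation(arxiv_id="2406.11633", discipline="cs.CV")
doc.entities.append(
    Entity(0, "Figure", r"\includegraphics{pipeline.pdf}", (50, 100, 550, 400))
)
doc.entities.append(
    Entity(1, "Caption", r"\caption{Auto-labeling pipeline.}", (50, 410, 550, 440))
)
doc.relations.append(Relation(1, 0, "caption-of"))
```

A record like this supports the benchmark's tasks directly: layout detection reads `bbox` and `layout_attribute`, document transformation reads `latex_source`, and QA over multi-page documents can traverse `relations`.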
Authors: Renqiu Xia, Song Mao, Xiangchao Yan, Hongbin Zhou, Bo Zhang, Haoyang Peng, Jiahao Pi, Daocheng Fu, Wenjie Wu, Hancheng Ye, Shiyang Feng, Bin Wang, Chao Xu, Conghui He, Pinlong Cai, Min Dou, Botian Shi, Sheng Zhou, Yongwei Wang, Junchi Yan