
DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications (1711.05073v4)

Published 14 Nov 2017 in cs.CL

Abstract: This paper introduces DuReader, a new large-scale, open-domain Chinese machine reading comprehension (MRC) dataset, designed to address real-world MRC. DuReader has three advantages over previous MRC datasets: (1) data sources: questions and documents are based on Baidu Search and Baidu Zhidao; answers are manually generated. (2) question types: it provides rich annotations for more question types, especially yes-no and opinion questions, which leaves more opportunities for the research community. (3) scale: it contains 200K questions, 420K answers and 1M documents; it is the largest Chinese MRC dataset so far. Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements. To help the community make these improvements, both DuReader and baseline systems have been posted online. We also organize a shared competition to encourage the exploration of more models. Since the release of the task, there have been significant improvements over the baselines.

Overview of DuReader: A Chinese Machine Reading Comprehension Dataset

The paper presents DuReader, an extensive Chinese Machine Reading Comprehension (MRC) dataset aimed at advancing research in real-world MRC applications. DuReader is introduced with three notable features that differentiate it from prior MRC datasets: diverse data sources, a wide range of question types, and a larger scale. The dataset comprises 200,000 questions, 420,000 answers, and 1 million documents, which marks it as the largest Chinese MRC dataset available to date.

Key Contributions

  1. Data Sources: DuReader leverages data from Baidu Search and Baidu Zhidao, effectively combining search logs and community-based question answering content. This combination provides a more realistic set of questions and documents compared to other datasets that rely heavily on synthetic data or crowdsourced questions from limited sources.
  2. Question Types: The dataset includes a broader array of question types, specifically incorporating yes-no and opinion-based questions, which prior datasets have often neglected. This variety presents a significant challenge for MRC systems, promoting the development of models capable of dealing with more complex question formats beyond simple fact-based queries.
  3. Scale: As highlighted, DuReader's substantial scale encompasses 200,000 questions alongside extensively annotated answer sets, reinforcing its utility for training and evaluating models on a diverse range of queries.
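To make the annotation scheme concrete, a DuReader-style record might look like the sketch below. The field names and label values here are illustrative assumptions based on the categories described in the paper (entity/description/yes-no crossed with fact/opinion), not the official release schema:

```python
# Hypothetical DuReader-style example record; field names and label
# values are illustrative assumptions, not the official data schema.
example = {
    "question": "How tall is the Oriental Pearl Tower?",
    "question_type": "ENTITY",        # e.g. ENTITY, DESCRIPTION, YES_NO
    "fact_or_opinion": "FACT",        # fact vs. opinion questions
    "documents": [
        {"title": "Oriental Pearl Tower",
         "paragraphs": ["The tower stands 468 meters tall ..."]},
    ],
    "answers": ["The tower is 468 meters tall."],  # manually written
}

def question_categories(record):
    """Return the (question type, fact/opinion) pair used to stratify questions."""
    return record["question_type"], record["fact_or_opinion"]

categories = question_categories(example)
```

Stratifying questions this way is what lets the dataset report per-type difficulty, since yes-no and opinion questions behave very differently from entity questions.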

Experimental Results and Insights

The paper demonstrates the performance of baseline systems on DuReader using well-established MRC models such as Match-LSTM and BiDAF. While these models showed strong performance improvements over simplistic baselines, they still fall short of human performance by a considerable margin. This discrepancy indicates the substantial complexity and challenge posed by DuReader, especially compared to datasets where state-of-the-art models have attained performances close to or surpassing human benchmarks.

In particular, the paper observes significant challenges in handling yes-no and opinion-based questions, noting that current span-selection methods are inadequate for summarizing answers drawn from multiple documents. Furthermore, long documents spanning many paragraphs pose a distinct paragraph-selection problem that current systems have not fully addressed, demanding more sophisticated paragraph ranking and selection strategies.
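To illustrate the paragraph-selection problem, a common baseline heuristic (a simplification, not necessarily the paper's exact procedure) is to pick, from each document, the paragraph with the highest token overlap with the question:

```python
def select_paragraph(question, paragraphs):
    """Pick the paragraph sharing the most tokens with the question.

    A simple recall-style overlap heuristic; real systems use learned
    paragraph rankers, which the paper argues are still needed.
    """
    q_tokens = set(question.lower().split())

    def overlap(paragraph):
        p_tokens = set(paragraph.lower().split())
        return len(q_tokens & p_tokens) / max(len(q_tokens), 1)

    return max(paragraphs, key=overlap)

paras = [
    "The Oriental Pearl Tower opened in 1995.",
    "The Oriental Pearl Tower is 468 meters tall, a Shanghai landmark.",
]
best = select_paragraph("how tall is the oriental pearl tower", paras)
```

Here the second paragraph wins because it shares more question tokens; the weakness of such heuristics on long, multi-paragraph documents is exactly what motivates learned paragraph ranking.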

Future Implications

The paper suggests several directions for future research:

  • Development of Novel Models: New algorithms and architectures could be devised to tackle the extensive range of question types and the comprehensive nature of the dataset. Particular attention should be paid to multi-document summarization and opinion recognition, which are less explored in existing MRC frameworks.
  • Enhanced Evaluation Metrics: The paper proposes an innovative opinion-aware evaluation method for better assessing systems on yes-no questions, encouraging further exploration in developing metrics that capture the subtleties and complexities posed by real-world data.
  • Dataset Expansion: Future iterations of DuReader could incorporate additional annotations, such as opinion tagging for all types of questions, further enriching the dataset's utility.
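The opinion-aware evaluation idea can be sketched as follows: a candidate yes-no answer is scored only against reference answers that share its opinion label, so a fluent answer with the opposite polarity earns nothing. The labels, function names, and the toy unigram-F1 metric below are illustrative assumptions standing in for the paper's actual metrics (e.g. ROUGE/BLEU):

```python
def opinion_aware_score(candidate, cand_label, references, score_fn):
    """Score a yes-no answer only against references with the same label.

    cand_label and reference labels are e.g. "Yes", "No", "Depends".
    score_fn is any text-overlap metric; returns 0.0 when no reference
    shares the candidate's opinion polarity.
    """
    matching = [ref for ref, label in references if label == cand_label]
    if not matching:
        return 0.0
    return max(score_fn(candidate, ref) for ref in matching)

def unigram_f1(a, b):
    # Toy stand-in for ROUGE/BLEU: unigram F1 overlap.
    a_tok, b_tok = a.split(), b.split()
    common = len(set(a_tok) & set(b_tok))
    if common == 0:
        return 0.0
    p, r = common / len(a_tok), common / len(b_tok)
    return 2 * p * r / (p + r)

refs = [("yes it is safe in moderation", "Yes"),
        ("no it is not safe", "No")]
good = opinion_aware_score("yes safe in moderation", "Yes", refs, unigram_f1)
bad = opinion_aware_score("yes safe in moderation", "No", refs, unigram_f1)
```

The same candidate text scores high when its opinion label matches the reference and low otherwise, capturing the intuition that polarity matters as much as wording for yes-no questions.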

The paper also notes a shared task organized to foster community engagement and stimulate progress in MRC research. The significant improvements observed since the release of the task underscore the dataset's potential to drive advances in building robust and comprehensive Chinese MRC systems.

In conclusion, DuReader takes a pivotal step in MRC research. By offering a dataset with broad coverage and realistic queries, it provides researchers with an invaluable resource to develop more capable and nuanced models, setting a new standard for MRC tasks within the Chinese language context.

Authors (13)
  1. Wei He
  2. Kai Liu
  3. Jing Liu
  4. Yajuan Lyu
  5. Shiqi Zhao
  6. Xinyan Xiao
  7. Yuan Liu
  8. Yizhong Wang
  9. Hua Wu
  10. Qiaoqiao She
  11. Xuan Liu
  12. Tian Wu
  13. Haifeng Wang
Citations (263)