
ArchivalQA: A Large-scale Benchmark Dataset for Open Domain Question Answering over Historical News Collections (2109.03438v4)

Published 8 Sep 2021 in cs.CL and cs.AI

Abstract: In the last few years, open-domain question answering (ODQA) has advanced rapidly due to the development of deep learning techniques and the availability of large-scale QA datasets. However, current datasets are essentially designed for synchronic document collections (e.g., Wikipedia). Temporal news collections, such as long-term news archives spanning several decades, are rarely used to train models despite being quite valuable to our society. To foster research on ODQA over such historical collections, we present ArchivalQA, a large question answering dataset consisting of 532,444 question-answer pairs designed for temporal news QA. We divide our dataset into four subparts based on question difficulty level and the presence of temporal expressions, which we believe are useful for training and testing ODQA systems characterized by different strengths and abilities. The novel QA dataset-construction framework that we introduce can also be applied to generate unambiguous, high-quality questions over other types of temporal document collections.

Authors (3)
  1. Jiexin Wang (14 papers)
  2. Adam Jatowt (58 papers)
  3. Masatoshi Yoshikawa (45 papers)
Citations (27)