
Frustratingly Easy Natural Question Answering (1909.05286v1)

Published 11 Sep 2019 in cs.CL

Abstract: Existing literature on Question Answering (QA) mostly focuses on algorithmic novelty, data augmentation, or increasingly large pre-trained language models like XLNet and RoBERTa. Additionally, many systems on the QA leaderboards lack associated research documentation, making it difficult to replicate their experiments. In this paper, we outline algorithmic components such as Attention-over-Attention, coupled with data augmentation and ensembling strategies, that have been shown to yield state-of-the-art results on benchmark datasets like SQuAD, even achieving super-human performance. Contrary to these prior results, when we evaluate on the recently proposed Natural Questions benchmark dataset, we find that an incredibly simple approach of transfer learning from BERT outperforms the previous state-of-the-art system, which was trained on 4 million more examples than ours, by 1.9 F1 points. Adding ensembling strategies further improves that number by 2.3 F1 points.
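The "transfer learning from BERT" recipe the abstract refers to is the standard extractive-QA setup: a span-prediction head on top of the encoder produces per-token start and end logits, and the answer is the highest-scoring valid span. Below is a minimal numpy sketch of that idea (not the authors' code; the function names, the single weight vector per head, and the `max_len` constraint are illustrative assumptions):

```python
import numpy as np

def qa_span_head(hidden_states, w_start, w_end):
    """BERT-style span head for extractive QA.

    hidden_states: (seq_len, hidden) token representations from the encoder.
    w_start, w_end: (hidden,) weight vectors projecting each token to a
    start logit and an end logit, respectively.
    """
    start_logits = hidden_states @ w_start
    end_logits = hidden_states @ w_end
    return start_logits, end_logits

def best_span(start_logits, end_logits, max_len=30):
    """Return the (start, end) pair maximizing start+end score,
    subject to end >= start and a maximum span length."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

# Toy usage: with these logits the best span is tokens 1..2.
start, end = best_span(np.array([0.0, 2.0, 1.0]), np.array([0.0, 1.0, 3.0]))
print(start, end)
```

In fine-tuning, `w_start` and `w_end` are the only task-specific parameters learned on top of the pre-trained encoder, which is why the approach is "frustratingly easy" compared to the heavier architectural machinery surveyed in the paper.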

Authors (8)
  1. Lin Pan (23 papers)
  2. Rishav Chakravarti (11 papers)
  3. Anthony Ferritto (10 papers)
  4. Michael Glass (20 papers)
  5. Alfio Gliozzo (28 papers)
  6. Salim Roukos (41 papers)
  7. Radu Florian (54 papers)
  8. Avirup Sil (45 papers)
Citations (14)