Learning Recurrent Span Representations for Extractive Question Answering (1611.01436v2)

Published 4 Nov 2016 in cs.CL

Abstract: The reading comprehension task, in which questions are asked about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset, in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s baseline by > 50%.
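
The core idea — scoring a fixed-length vector for every candidate span under a single softmax, rather than factoring the prediction into independent start- and end-marker decisions — can be sketched compactly. Below is a minimal PyTorch illustration, assuming a BiLSTM encoder and span vectors built by concatenating the endpoint hidden states; this endpoint construction is a common simplification, not necessarily the paper's exact recurrent span representation, and the question-conditioning of the encoder is omitted.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Sketch of explicit span scoring: encode the passage with a BiLSTM,
    build a fixed-length vector for every candidate span, and normalize
    scores over all spans jointly (not over starts and ends separately)."""

    def __init__(self, emb_dim=100, hidden=100, max_span_len=30):
        super().__init__()
        self.max_span_len = max_span_len
        self.encoder = nn.LSTM(emb_dim, hidden,
                               bidirectional=True, batch_first=True)
        # Span vector = [h_start ; h_end], each endpoint of size 2*hidden.
        self.ffnn = nn.Sequential(
            nn.Linear(4 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, passage_emb):                 # (1, T, emb_dim)
        h, _ = self.encoder(passage_emb)            # (1, T, 2*hidden)
        T = h.size(1)
        spans, vecs = [], []
        for i in range(T):                          # enumerate all spans
            for j in range(i, min(i + self.max_span_len, T)):
                spans.append((i, j))
                vecs.append(torch.cat([h[0, i], h[0, j]]))
        scores = self.ffnn(torch.stack(vecs)).squeeze(-1)  # one score per span
        return spans, torch.log_softmax(scores, dim=0)     # global softmax

# Hypothetical usage: pick the highest-scoring span as the answer.
scorer = SpanScorer()
spans, log_probs = scorer(torch.randn(1, 50, 100))
best_start, best_end = spans[log_probs.argmax().item()]
```

The single softmax over all spans is what distinguishes this family of models from factored start/end prediction: the model compares complete answer candidates against one another directly.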

Authors (6)
  1. Kenton Lee (40 papers)
  2. Shimi Salant (2 papers)
  3. Tom Kwiatkowski (21 papers)
  4. Ankur Parikh (9 papers)
  5. Dipanjan Das (42 papers)
  6. Jonathan Berant (107 papers)
Citations (149)
