
Answer Generation for Retrieval-based Question Answering Systems (2106.00955v1)

Published 2 Jun 2021 in cs.CL

Abstract: Recent advancements in transformer-based models have greatly improved the ability of Question Answering (QA) systems to provide correct answers; in particular, answer sentence selection (AS2) models, core components of retrieval-based systems, have achieved impressive results. While generally effective, these models fail to provide a satisfying answer when all retrieved candidates are of poor quality, even if they contain correct information. In AS2, models are trained to select the best answer sentence among a set of candidates retrieved for a given question. In this work, we propose to generate answers from a set of AS2 top candidates. Rather than selecting the best candidate, we train a sequence-to-sequence transformer model to generate an answer from a candidate set. Our tests on three English AS2 datasets show improvements of up to 32 absolute points in accuracy over the state of the art.
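The generation step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the separator token, candidate count, and function name are assumptions chosen for clarity.

```python
def build_generation_input(question, candidates, k=5, sep=" </s> "):
    """Concatenate a question with its top-k AS2 candidates into one
    source string for a sequence-to-sequence model (e.g., T5 or BART).

    The separator and the value of k are illustrative choices; the
    paper's exact input format may differ.
    """
    top_k = candidates[:k]
    return sep.join([question] + top_k)


# The resulting string would be tokenized and fed to a seq2seq
# transformer fine-tuned to emit the final answer sentence, rather
# than merely ranking the candidates.
src = build_generation_input(
    "Who wrote Hamlet?",
    [
        "Hamlet is a tragedy by William Shakespeare.",
        "The play was probably written around 1600.",
    ],
)
```

In this framing, the model can fuse correct information scattered across several imperfect candidates, which is exactly the failure case of pure selection noted above.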

Authors (4)
  1. Chao-Chun Hsu (13 papers)
  2. Eric Lind (3 papers)
  3. Luca Soldaini (62 papers)
  4. Alessandro Moschitti (48 papers)
Citations (26)