Conversational Answer Generation and Factuality for Reading Comprehension Question-Answering (2103.06500v1)

Published 11 Mar 2021 in cs.CL

Abstract: Question answering (QA) is an important use case on voice assistants. A popular approach to QA is extractive reading comprehension (RC), which finds an answer span in a text passage. However, extractive answers are often unnatural in a conversational context, which results in a suboptimal user experience. In this work, we investigate conversational answer generation for QA. We propose AnswerBART, an end-to-end generative RC model which combines answer generation from multiple passages with passage ranking and answerability. Moreover, a hurdle in applying generative RC is hallucination, where the answer is factually inconsistent with the passage text. We leverage recent work from summarization to evaluate factuality. Experiments show that AnswerBART significantly improves over previous best published results on MS MARCO 2.1 NLGEN by 2.5 ROUGE-L and NarrativeQA by 9.4 ROUGE-L.
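To make the generative RC setup concrete, below is a minimal sketch using an off-the-shelf BART model from Hugging Face Transformers: the question is concatenated with candidate passages and an answer is generated as free text. This is not the authors' implementation; the model checkpoint, separator format, and decoding settings are illustrative assumptions, and the paper's joint passage ranking and answerability components are omitted.

```python
# Hypothetical sketch of generative reading comprehension with BART.
# A fine-tuned checkpoint would be needed for useful answers; the
# pretrained model and input format here are assumptions for illustration.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

question = "Who wrote the novel?"
passages = [
    "The novel was written by Jane Austen in 1813.",
    "It was first published anonymously.",
]

# Join the question and passages into one source sequence; AnswerBART
# additionally learns passage ranking and answerability, omitted here.
source = question + " </s> " + " </s> ".join(passages)
inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)

output_ids = model.generate(**inputs, num_beams=4, max_length=64)
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(answer)
```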

Authors (5)
  1. Stan Peshterliev (6 papers)
  2. Barlas Oguz (36 papers)
  3. Debojeet Chatterjee (5 papers)
  4. Hakan Inan (8 papers)
  5. Vikas Bhardwaj (9 papers)
Citations (4)
