Read before Generate! Faithful Long Form Question Answering with Machine Reading (2203.00343v1)

Published 1 Mar 2022 in cs.CL and cs.AI

Abstract: Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question. While current work on LFQA using large pre-trained models for generation is effective at producing fluent and somewhat relevant content, one primary challenge lies in how to generate a faithful answer with less hallucinated content. We propose a new end-to-end framework that jointly models answer generation and machine reading. The key idea is to augment the generation model with fine-grained, answer-related salient information, which can be viewed as an emphasis on faithful facts. State-of-the-art results on two LFQA datasets, ELI5 and MS MARCO, demonstrate the effectiveness of our method in comparison with strong baselines on automatic and human evaluation metrics. A detailed analysis further proves the competency of our methods in generating fluent, relevant, and more faithful answers.
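
The listing does not spell out the framework's internals, but the abstract's core idea can be sketched: a machine-reading component scores retrieved evidence for answer-related salience, and the salient spans are emphasized in the generator's input so the decoder stays closer to supported facts. The sketch below is a minimal, hypothetical illustration of that input-construction step only; the lexical-overlap scorer and the `<sal>` markers are assumptions standing in for the paper's trained reader and its actual fusion mechanism.

```python
# Hypothetical sketch of the "read before generate" idea: score evidence
# sentences for answer-related salience, mark the most salient ones, and
# feed the marked context to a seq2seq generator. The overlap-based scorer
# is a toy stand-in for a trained machine-reading module.

from typing import List, Tuple


def score_salience(question: str, sentence: str) -> float:
    """Toy salience score: fraction of question terms that appear in the sentence."""
    q_terms = set(question.lower().split())
    s_terms = set(sentence.lower().split())
    return len(q_terms & s_terms) / max(len(q_terms), 1)


def build_generator_input(question: str,
                          evidence_sentences: List[str],
                          top_k: int = 2) -> str:
    """Mark the top-k salient sentences with <sal> tags (an assumed convention)
    and concatenate everything into one source sequence for the generator."""
    ranked: List[Tuple[float, str]] = sorted(
        ((score_salience(question, s), s) for s in evidence_sentences),
        reverse=True,
    )
    salient = {s for _, s in ranked[:top_k]}
    marked = [f"<sal> {s} </sal>" if s in salient else s
              for s in evidence_sentences]
    return f"question: {question} context: " + " ".join(marked)


if __name__ == "__main__":
    q = "Why is the sky blue?"
    docs = [
        "Rayleigh scattering affects short wavelengths of light more strongly.",
        "The sky appears blue because air molecules scatter blue sunlight.",
        "Many cultures have myths about the color of the sky.",
    ]
    print(build_generator_input(q, docs))
```

In the paper's actual setting, the salience-marked sequence (or the reader's span predictions) would condition a large pre-trained generator, with the reader and generator trained jointly end to end.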

Authors (7)
  1. Dan Su (101 papers)
  2. Xiaoguang Li (71 papers)
  3. Jindi Zhang (6 papers)
  4. Lifeng Shang (90 papers)
  5. Xin Jiang (242 papers)
  6. Qun Liu (230 papers)
  7. Pascale Fung (150 papers)
Citations (51)