Natural Answer Generation: From Factoid Answer to Full-length Answer using Grammar Correction (2112.03849v1)

Published 7 Dec 2021 in cs.CL and cs.AI

Abstract: Question Answering systems these days typically use template-based language generation. Though adequate for a domain-specific task, these systems are too restrictive and predefined for domain-independent systems. This paper proposes a system that outputs a full-length answer given a question and the extracted factoid answer (short spans such as named entities) as the input. Our system uses constituency and dependency parse trees of questions. A transformer-based Grammar Error Correction model, GECToR (2020), is used as a post-processing step for better fluency. We compare our system with (i) Modified Pointer Generator (SOTA) and (ii) Fine-tuned DialoGPT for factoid questions. We also test our approach on existential (yes-no) questions with better results. Our model generates more accurate and fluent answers than the state-of-the-art (SOTA) approaches. The evaluation is done on the NewsQA and SQuAD datasets, with increments of 0.4 and 0.9 percentage points in ROUGE-1 score, respectively. Also, the inference time is reduced by 85% as compared to the SOTA. The improved datasets used for our evaluation will be released as part of the research contribution.
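The abstract describes a pipeline: take a question plus its extracted factoid answer, rewrite them into a full declarative sentence, and then run a grammar correction step for fluency. The sketch below is only a minimal illustration of that general idea, not the authors' method; the hypothetical regex rule stands in for the paper's constituency/dependency parse-tree transformations, and the grammar correction step (GECToR) is omitted.

```python
# Minimal illustrative sketch of the question + factoid -> full-length answer idea.
# NOT the authors' implementation: the paper uses constituency and dependency
# parse trees of the question and GECToR as a fluency post-processing step,
# whereas this uses a single hypothetical pattern rule for demonstration only.
import re


def full_length_answer(question: str, factoid: str) -> str:
    """Rewrite a wh-question and its short factoid answer into a full sentence."""
    q = question.strip().rstrip("?")
    # Hypothetical rule: "Who/What <rest of clause>" -> "<factoid> <rest of clause>."
    m = re.match(r"^(who|what)\s+(.*)$", q, flags=re.IGNORECASE)
    if m:
        return f"{factoid} {m.group(2)}."
    # Fallback when no rule applies: keep the clause and attach the factoid.
    return f"{q.capitalize()}: {factoid}."


if __name__ == "__main__":
    print(full_length_answer("Who wrote Hamlet?", "William Shakespeare"))
    # -> "William Shakespeare wrote Hamlet."
```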

Authors (5)
  1. Manas Jain (2 papers)
  2. Sriparna Saha (48 papers)
  3. Pushpak Bhattacharyya (153 papers)
  4. Gladvin Chinnadurai (1 paper)
  5. Manish Kumar Vatsa (1 paper)
Citations (2)
