
Improving Question Generation with Sentence-level Semantic Matching and Answer Position Inferring (1912.00879v3)

Published 2 Dec 2019 in cs.CL, cs.AI, and cs.LG

Abstract: Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation. However, we observe that these approaches often generate incorrect question words or keywords and copy answer-irrelevant words from the input. We believe the key root causes are the lack of global question semantics and insufficient exploitation of answer position-awareness. In this paper, we propose a neural question generation model with two concrete modules: sentence-level semantic matching and answer position inferring. Further, we enhance the initial state of the decoder by leveraging an answer-aware gated fusion mechanism. Experimental results demonstrate that our model outperforms the state-of-the-art (SOTA) models on the SQuAD and MARCO datasets. Owing to its generality, our work also improves existing models significantly.
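The abstract's "answer-aware gated fusion" for the decoder's initial state can be illustrated with a minimal sketch. This is a hypothetical simplification in plain Python: a scalar sigmoid gate interpolating between a passage vector and an answer vector; the paper's actual parameterization (vector gates, learned weights over encoder hidden states) may differ.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(h_passage, h_answer, w_p, w_a, b):
    """Fuse a passage vector and an answer vector with a scalar gate.

    Hypothetical sketch: the gate g is computed from a linear layer over
    both inputs, and the fused vector is a convex combination where g
    weights the passage representation and (1 - g) the answer one.
    """
    score = sum(w * h for w, h in zip(w_p, h_passage)) \
          + sum(w * h for w, h in zip(w_a, h_answer)) + b
    g = sigmoid(score)
    return [g * hp + (1.0 - g) * ha for hp, ha in zip(h_passage, h_answer)]

# With zero weights the gate is sigmoid(0) = 0.5, an even blend.
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0], 0.0)
```

In a full model the fused vector would initialize the decoder's hidden state so that generation is conditioned on where the answer sits in the context.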

Authors (5)
  1. Xiyao Ma (6 papers)
  2. Qile Zhu (8 papers)
  3. Yanlin Zhou (19 papers)
  4. Xiaolin Li (54 papers)
  5. Dapeng Wu (52 papers)
Citations (56)
