
Inquisitive Question Generation for High Level Text Comprehension (2010.01657v1)

Published 4 Oct 2020 in cs.CL

Abstract: Inquisitive probing questions come naturally to humans in a variety of settings, but generating them is a challenging task for automatic systems. One natural type of question tries to fill a gap in knowledge during text comprehension, such as reading a news article: we might ask about background information, the deeper reasons behind events, and more. Despite recent progress with data-driven approaches, generating such questions is beyond the range of models trained on existing datasets. We introduce INQUISITIVE, a dataset of ~19K questions elicited while a person is reading through a document. Compared to existing datasets, INQUISITIVE questions target higher-level (semantic and discourse) comprehension of text. We show that readers engage in a series of pragmatic strategies to seek information. Finally, we evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions even though the task is challenging, and we highlight the importance of context for generating INQUISITIVE questions.
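As a rough illustration of how context-conditioned question generation might be set up for a left-to-right model like GPT-2, the sketch below assembles an input string from preceding context, a target sentence, and an optional focus span. The `build_prompt` helper and the bracketed separator tokens are illustrative assumptions, not the paper's actual input format.

```python
def build_prompt(context_sentences, target_sentence, span=None):
    """Assemble one input string for a left-to-right LM such as GPT-2.

    The model would be expected to continue the string with an
    inquisitive question about `target_sentence` (optionally focused
    on `span`). This layout is a hypothetical sketch, not the format
    used in the paper.
    """
    context = " ".join(context_sentences)
    focus = f" [SPAN] {span}" if span else ""
    return f"{context} [TARGET] {target_sentence}{focus} [QUESTION]"


prompt = build_prompt(
    ["A new trade agreement was signed on Monday."],
    "Officials declined to share details of the deal.",
    span="declined to share details",
)
print(prompt)
```

A fine-tuned model would then decode a question (e.g. "Why were the details withheld?") after the final separator; the abstract's finding that context matters corresponds to including the preceding sentences rather than the target sentence alone.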

Authors (5)
  1. Wei-Jen Ko (11 papers)
  2. Te-Yuan Chen (1 paper)
  3. Yiyan Huang (9 papers)
  4. Greg Durrett (118 papers)
  5. Junyi Jessy Li (79 papers)
Citations (42)
