
QAScore -- An Unsupervised Unreferenced Metric for the Question Generation Evaluation (2210.04320v1)

Published 9 Oct 2022 in cs.CL

Abstract: Question Generation (QG) aims to automate the task of composing questions for a passage with a set of chosen answers found within the passage. In recent years, the introduction of neural generation models has resulted in substantial improvements in the quality of automatically generated questions, especially compared to traditional approaches that employ manually crafted heuristics. However, the metrics commonly applied in QG evaluations have been criticized for their low agreement with human judgement. We therefore propose a new reference-free evaluation metric, called QAScore, that has the potential to provide a better mechanism for evaluating QG systems. Instead of fine-tuning a language model to maximize its correlation with human judgements, QAScore evaluates a question by computing the cross entropy according to the probability that the language model can correctly generate the masked words in the answer to that question. Furthermore, we conduct a new crowd-sourcing human evaluation experiment for QG evaluation to investigate how QAScore and other metrics correlate with human judgements. Experiments show that QAScore obtains a stronger correlation with the results of our proposed human evaluation method than existing traditional word-overlap-based metrics such as BLEU and ROUGE, as well as the existing pretrained-model-based metric BERTScore.
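The scoring idea described in the abstract can be sketched in a few lines: mask each token of the answer in turn, ask a masked language model for the log-probability of recovering it given the passage and question, and accumulate those log-probabilities. The sketch below is an assumption-laden illustration of that idea, not the paper's implementation; `token_logprob` stands in for a real masked-LM call (e.g., a RoBERTa forward pass), which here is replaced by a toy function.

```python
import math

def qascore(answer_tokens, token_logprob):
    """Hedged sketch of QAScore's core computation, as described in
    the abstract: sum the log-probabilities that a language model
    assigns to each masked answer token. A value closer to 0 means
    the model recovers the answer more easily, suggesting a better
    question. `token_logprob(i, tok)` is a hypothetical stand-in
    for scoring token `tok` masked at position `i` with a real
    masked LM conditioned on the passage and question."""
    return sum(token_logprob(i, tok) for i, tok in enumerate(answer_tokens))

# Toy stand-in for a masked LM: every token gets probability 0.5.
toy_lm = lambda i, tok: math.log(0.5)

score = qascore(["the", "eiffel", "tower"], toy_lm)
```

Because the metric only needs the model's token probabilities, no reference questions and no fine-tuning are required, which is what makes it unsupervised and reference-free.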

Authors (5)
  1. Tianbo Ji (10 papers)
  2. Chenyang Lyu (44 papers)
  3. Gareth Jones (26 papers)
  4. Liting Zhou (8 papers)
  5. Yvette Graham (20 papers)
Citations (14)