Automatic Answerability Evaluation for Question Generation (2309.12546v2)
Abstract: Conventional automatic evaluation metrics for natural language generation (NLG), such as BLEU and ROUGE, are based on measuring the n-gram overlap between the generated and reference text. These simple metrics may be insufficient for more complex tasks, such as question generation (QG), which requires generating questions that are answerable by the reference answers. Developing a more sophisticated automatic evaluation metric thus remains an urgent problem in QG research. This work proposes PMAN (Prompting-based Metric on ANswerability), a novel automatic evaluation metric that assesses whether the generated questions are answerable by the reference answers in QG tasks. Extensive experiments demonstrate that its evaluation results are reliable and align with human evaluations. We further apply our metric to evaluate the performance of QG models, which shows that it complements conventional metrics. Our implementation of a GPT-based QG model achieves state-of-the-art performance in generating answerable questions.
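The metric described in the abstract is a prompting-based check: a large language model is asked whether a generated question can be answered by the reference answer. Below is a minimal sketch of how such a check might be scripted, assuming the OpenAI Python client (v1+). The prompt wording, model name, YES/NO verdict parsing, and the corpus-level aggregation in `pman_score` are illustrative assumptions, not the paper's exact prompt or scoring protocol.

```python
# Hypothetical sketch of a prompting-based answerability metric in the spirit of PMAN.
# Assumes the openai>=1.0 client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative prompt; the paper's actual prompt may differ.
PROMPT_TEMPLATE = (
    "Context: {context}\n"
    "Question: {question}\n"
    "Reference answer: {answer}\n\n"
    "Can the question be answered from the context, and is the reference answer "
    "a correct answer to it? Reply with a single word: YES or NO."
)

def is_answerable(context: str, question: str, answer: str, model: str = "gpt-4") -> bool:
    """Ask the LLM whether the generated question is answerable by the reference answer."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic judgments make the metric easier to reproduce
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(context=context, question=question, answer=answer),
        }],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("YES")

def pman_score(examples: list[dict], model: str = "gpt-4") -> float:
    """Corpus-level score: fraction of generated questions judged answerable."""
    judged = [is_answerable(ex["context"], ex["question"], ex["answer"], model) for ex in examples]
    return sum(judged) / len(judged)
```

A binary YES/NO verdict with temperature 0 is one simple way to keep the judgment stable across runs; other designs (e.g., graded scores or chain-of-thought prompting before the verdict) are possible.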
- Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
- Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106.
- CQG: A simple and effective controlled generation framework for multi-hop question generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6896–6906.
- Multi-hop question generation using hierarchical encoding-decoding and context switch mechanism. Entropy, 23(11):1449.
- QAScore—An unsupervised unreferenced metric for the question generation evaluation. Entropy, 24(11):1514.
- Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. arXiv preprint arXiv:2302.14520.
- Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81.
- G-Eval: NLG evaluation using GPT-4 with better human alignment, May 2023. arXiv preprint arXiv:2303.16634.
- Preksha Nema and Mitesh M Khapra. 2018. Towards a better metric for evaluating question generation systems. arXiv preprint arXiv:1808.10192.
- OpenAI. 2022. ChatGPT: Optimizing language models for dialogue.
- Semantic graphs for generating deep questions. arXiv preprint arXiv:2004.12704.
- BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.
- Multi-hop question generation with graph convolutional network. arXiv preprint arXiv:2010.09240.
- Is ChatGPT a good NLG evaluator? A preliminary study. arXiv preprint arXiv:2303.04048.
- Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
- Exploring question-specific rewards for generating deep questions. arXiv preprint arXiv:2011.01102.
- HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.
- Machine comprehension by text-to-text neural question generation. arXiv preprint arXiv:1705.02012.
- Neural question generation from text: A preliminary study. In Natural Language Processing and Chinese Computing: 6th CCF International Conference, NLPCC 2017, Dalian, China, November 8–12, 2017, Proceedings 6, pages 662–671. Springer.
Authors: Zifan Wang, Kotaro Funakoshi, Manabu Okumura