Question-Answering Approach to Evaluating Legal Summaries (2309.15016v2)

Published 26 Sep 2023 in cs.CL

Abstract: Traditional evaluation metrics like ROUGE compare lexical overlap between the reference and generated summaries without taking argumentative structure into account, which is important for legal summaries. In this paper, we propose a novel legal summarization evaluation framework that uses GPT-4 to generate a set of question-answer pairs covering the main points and information in the reference summary. GPT-4 is then used to answer those questions based on the generated summary. Finally, GPT-4 grades the answers derived from the reference summary and from the generated summary. We examined the correlation between GPT-4 grading and human grading. The results suggest that this question-answering approach with GPT-4 can be a useful tool for gauging summary quality.
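The framework described in the abstract reduces to three GPT-4 calls: generate question-answer pairs from the reference summary, answer those questions using only the generated summary, and grade the candidate answers against the reference answers. The sketch below illustrates that loop with the OpenAI Python client; the prompt wording, the number of QA pairs, and the 1-10 grading scale are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of the QA-based summary evaluation loop, assuming the
# OpenAI Python client (openai>=1.0). Prompts, QA-pair count, and the
# grading scale are placeholders, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_gpt4(prompt: str) -> str:
    """Single-turn GPT-4 call reused by all three evaluation steps."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


def generate_qa_pairs(reference_summary: str, n: int = 5) -> str:
    """Step 1: derive QA pairs covering the reference summary's main points."""
    return ask_gpt4(
        f"Write {n} numbered question-answer pairs that cover the main points "
        f"of this legal case summary.\n\nSummary:\n{reference_summary}"
    )


def answer_from_candidate(questions: str, candidate_summary: str) -> str:
    """Step 2: answer the same questions using only the generated summary."""
    return ask_gpt4(
        "Answer each question using only the summary below. If the summary "
        "does not contain the answer, say so.\n\n"
        f"Summary:\n{candidate_summary}\n\nQuestions:\n{questions}"
    )


def grade_answers(qa_pairs: str, candidate_answers: str) -> str:
    """Step 3: grade the candidate answers against the reference answers."""
    return ask_gpt4(
        "For each question, compare the candidate answer with the reference "
        "answer and assign a score from 1 (wrong or missing) to 10 (fully "
        "correct). Report per-question scores and the average.\n\n"
        f"Reference QA pairs:\n{qa_pairs}\n\nCandidate answers:\n{candidate_answers}"
    )


if __name__ == "__main__":
    reference = "..."  # gold-standard legal summary
    candidate = "..."  # system-generated summary to evaluate
    qa = generate_qa_pairs(reference)
    answers = answer_from_candidate(qa, candidate)
    print(grade_answers(qa, answers))
```

Running the grading step at temperature 0 keeps the scores roughly deterministic, which makes it easier to compare them against human grades as the paper does.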

Authors (2)
  1. Huihui Xu (9 papers)
  2. Kevin Ashley (4 papers)
Citations (1)

