Less is More for Improving Automatic Evaluation of Factual Consistency (2404.06579v1)
Abstract: Assessing the factual consistency of automatically generated text with respect to its source context is crucial for developing reliable natural language generation applications. Recent literature proposes AlignScore, which uses a unified alignment model to evaluate factual consistency and substantially outperforms previous methods across many benchmark tasks. In this paper, we take a closer look at the datasets used in AlignScore and uncover an unexpected finding: training on fewer data points can actually improve performance. We process the original AlignScore training data to remove noise, augment it with robustness-enhanced samples, and use a subset comprising 10% of the original data to train an improved factual consistency evaluation model, which we call LIM-RA (Less Is More for Robust AlignScore). LIM-RA demonstrates superior performance, consistently outperforming AlignScore and other strong baselines such as ChatGPT across four benchmarks (two based on traditional natural language generation datasets and two focused on LLM outputs). Our experiments show that LIM-RA achieves the highest score on 24 of the 33 test datasets while staying competitive on the rest, establishing a new state of the art.
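The abstract describes a three-step curation pipeline: remove noisy training pairs, add robustness-oriented augmentations, and train on roughly a 10% subset. The sketch below illustrates that pipeline in Python as a minimal example; the field names (`context`, `claim`, `label`), the `is_noisy` heuristic, and the entity-swap style perturbation are assumptions for illustration, not the authors' released implementation.

```python
import random

def is_noisy(example):
    """Toy noise filter: drop pairs with empty text or claims that copy the context verbatim."""
    context, claim = example["context"], example["claim"]
    if not context.strip() or not claim.strip():
        return True
    return claim.strip().lower() == context.strip().lower()

def augment_for_robustness(example):
    """Toy robustness augmentation: a perturbed (assumed entity-swap style) negative claim."""
    perturbed = dict(example)
    perturbed["claim"] = example["claim"].replace("2020", "2021")  # illustrative surface edit
    perturbed["label"] = 0  # the perturbed claim is treated as factually inconsistent
    return perturbed

def curate(dataset, keep_fraction=0.10, seed=13):
    """Filter noise, augment consistent pairs, and subsample to ~10% of the original size."""
    clean = [ex for ex in dataset if not is_noisy(ex)]
    augmented = clean + [augment_for_robustness(ex) for ex in clean if ex["label"] == 1]
    random.Random(seed).shuffle(augmented)
    target = max(1, int(len(dataset) * keep_fraction))
    return augmented[:target]

if __name__ == "__main__":
    toy = [
        {"context": "The launch happened in 2020.", "claim": "The launch was in 2020.", "label": 1},
        {"context": "", "claim": "Empty source document.", "label": 0},
    ]
    print(curate(toy))
```

The key design point the paper emphasizes is the last step: rather than training on the full (noisier) corpus, a small curated subset is kept, which is what "less is more" refers to.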
- Evaluating factual consistency of summaries with large language models. arXiv preprint arXiv:2305.14069.
- BAMBOO: A comprehensive benchmark for evaluating long text modeling capacities of large language models. arXiv preprint arXiv:2309.13345.
- QAFactEval: Improved QA-based factual consistency evaluation for summarization. arXiv preprint arXiv:2112.08542.
- Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220.
- GPTScore: Evaluate as you desire. arXiv preprint arXiv:2302.04166.
- Are large language models reliable judges? A study on the factuality evaluation capabilities of LLMs. arXiv preprint arXiv:2311.00681.
- DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543.
- Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
- TRUE: Re-evaluating factual consistency evaluation. arXiv preprint arXiv:2204.04991.
- Zero-shot faithfulness evaluation for text summarization with foundation language model. arXiv preprint arXiv:2310.11648.
- Mistral 7B. arXiv preprint arXiv:2310.06825.
- Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.
- LLMs as factual reasoners: Insights from existing benchmarks and beyond. arXiv preprint arXiv:2305.14540.
- SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163–177.
- Less annotating, more classifying: Addressing the data scarcity issue of supervised machine learning with deep transfer learning and BERT-NLI. Political Analysis, 32(1):84–100.
- HaluEval: A large-scale hallucination evaluation benchmark for large language models.
- G-Eval: NLG evaluation using GPT-4 with better human alignment. arXiv preprint arXiv:2303.16634.
- RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
- ChatGPT as a factual inconsistency evaluator for text summarization.
- Adversarial NLI: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599.
- Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.
- QuestEval: Summarization asks for fact-based evaluation. arXiv preprint arXiv:2103.12693.
- A new benchmark and reverse validation method for passage-level hallucination detection. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3898–3908, Singapore. Association for Computational Linguistics.
- DocNLI: A large-scale dataset for document-level natural language inference. arXiv preprint arXiv:2106.09449.
- Automatic evaluation of attribution by large language models. arXiv preprint arXiv:2305.06311.
- AlignScore: Evaluating factual consistency with a unified alignment function. arXiv preprint arXiv:2305.16739.