ReleaseEval: A Benchmark for Evaluating Language Models in Automated Release Note Generation (2511.02713v1)
Abstract: Automated release note generation addresses the challenge of documenting frequent software updates, where manual effort is time-consuming and error-prone. Although recent advances in large language models (LLMs) have improved this process, progress remains hindered by dataset limitations, including missing explicit licenses and limited reproducibility, and by incomplete task designs that rely mainly on commit messages for summarization while overlooking fine-grained context such as commit hierarchies and code changes. To fill this gap, we introduce ReleaseEval, a reproducible and openly licensed benchmark designed to systematically evaluate LLMs on automated release note generation. ReleaseEval comprises 94,987 release notes from 3,369 repositories across six programming languages and supports three task settings with three levels of input granularity: (1) commit2sum, which generates release notes from commit messages; (2) tree2sum, which additionally incorporates commit tree structure; and (3) diff2sum, which leverages fine-grained code diffs. Both automated and human evaluations show that LLMs consistently outperform traditional baselines across all tasks, achieving substantial gains on tree2sum while still struggling on diff2sum. These findings highlight LLMs' proficiency at leveraging structured information and reveal challenges in abstracting from long code diffs.
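To make the three input granularities concrete, the following is a minimal sketch of how inputs might be assembled into prompts for each task setting. The data fields, function names, and prompt wording are illustrative assumptions, not the benchmark's actual data schema or code.

```python
# Hypothetical sketch of the three ReleaseEval input granularities.
# Field and function names are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Commit:
    message: str                                          # commit message (commit2sum)
    sha: str = ""
    parent_ids: List[str] = field(default_factory=list)   # commit-tree links (tree2sum)
    diff: str = ""                                         # unified code diff (diff2sum)


def commit2sum_prompt(commits: List[Commit]) -> str:
    """Coarsest granularity: release notes from commit messages only."""
    body = "\n".join(f"- {c.message}" for c in commits)
    return f"Write release notes for the following commits:\n{body}"


def tree2sum_prompt(commits: List[Commit]) -> str:
    """Adds commit-tree structure (parent links) alongside the messages."""
    lines = [
        f"- {c.sha} (parents: {', '.join(c.parent_ids) or 'root'}): {c.message}"
        for c in commits
    ]
    return "Write release notes for this commit tree:\n" + "\n".join(lines)


def diff2sum_prompt(commits: List[Commit]) -> str:
    """Finest granularity: includes the code diffs themselves."""
    chunks = [f"Commit: {c.message}\nDiff:\n{c.diff}" for c in commits]
    return "Write release notes from these code changes:\n\n" + "\n\n".join(chunks)
```

Under this framing, tree2sum adds structural context at modest extra input length, whereas diff2sum can inflate the context with long diffs, which is consistent with the abstract's observation that models gain on tree2sum but struggle on diff2sum.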