
SAGA: Summarization-Guided Assert Statement Generation (2305.14808v1)

Published 24 May 2023 in cs.SE

Abstract: Generating meaningful assert statements is one of the key challenges in automated test case generation, as it requires understanding the intended functionality of the code under test. Recently, deep learning-based models have shown promise in improving the performance of assert statement generation. However, existing models rely only on test prefixes and their corresponding focal methods, ignoring developer-written summarization. Based on our observations, summarization content usually expresses the intended program behavior or contains parameters that appear directly in the assert statement. Such information can help existing models overcome their current inability to accurately predict assert statements. This paper presents a novel summarization-guided approach for automatically generating assert statements. To derive generic representations for natural language (i.e., summarization) and programming language (i.e., test prefixes and focal methods), we leverage a pre-trained language model as the reference architecture and fine-tune it on the task of assert statement generation. To the best of our knowledge, the proposed approach is the first to leverage the summarization of focal methods as guidance for making the generated assert statements more accurate. We demonstrate the effectiveness of our approach on two real-world datasets in comparison with state-of-the-art models.
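
For illustration, below is a minimal, hypothetical Java example of the three ingredients the approach combines: a focal method whose Javadoc summary states the intended behavior, a test prefix, and the target assert statement. The class, method, and values (AbsUtilsTest, absolute, testAbsolute) are invented for this sketch and do not come from the paper or its datasets.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AbsUtilsTest {

        // Focal method (hypothetical). Its Javadoc summary names the
        // expected behavior, which is the natural-language guidance the
        // approach adds to the model's input.
        /** Returns the absolute value of the given integer. */
        static int absolute(int x) {
            return x < 0 ? -x : x;
        }

        // Test prefix: everything up to the assert. The model is asked
        // to generate the final assert statement from the prefix, the
        // focal method, and its summary.
        @Test
        public void testAbsolute() {
            int result = absolute(-5);
            assertEquals(5, result);  // target assert statement to generate
        }
    }

In the paper's framing, the summary ("absolute value") points at the expected oracle; the exact way the test prefix, focal method, and summary are serialized into the model's input is specific to the paper and not reproduced here.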

Authors (5)
  1. Yuwei Zhang (48 papers)
  2. Zhi Jin (160 papers)
  3. Zejun Wang (13 papers)
  4. Ying Xing (28 papers)
  5. Ge Li (213 papers)
Citations (2)
