
Boundary Value Test Input Generation Using Prompt Engineering with LLMs: Fault Detection and Coverage Analysis (2501.14465v1)

Published 24 Jan 2025 in cs.SE

Abstract: As software systems grow more complex, automated testing has become essential to ensuring reliability and performance. Traditional methods for boundary value test input generation can be time-consuming and may struggle to cover all potential error cases, especially in systems with intricate or highly variable boundaries. This paper presents a framework for assessing the effectiveness of LLMs, guided by prompt engineering, in generating boundary value test inputs for white-box software testing. Specifically, we evaluate LLM-based test input generation by analyzing fault detection rates and test coverage, comparing the LLM-generated test sets with those produced using traditional boundary value analysis methods. Our analysis highlights the strengths of LLMs in boundary value generation, particularly in detecting common boundary-related issues, while also showing that they still struggle with complex or less common test inputs. This research provides insights into the role of LLMs in boundary value testing, underscoring both their potential and the areas where automated testing methods need improvement.
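To make the comparison described in the abstract concrete, the sketch below contrasts classic boundary value analysis for an integer-range parameter with a prompt that asks an LLM to propose boundary test inputs. This is an illustrative assumption, not the paper's framework: the IntRange type, the traditional_bva helper, and the prompt wording are all hypothetical stand-ins.

```python
# Minimal sketch (not the paper's implementation): contrast traditional
# boundary value analysis with a prompt-engineering approach for the same task.
from dataclasses import dataclass
from typing import List


@dataclass
class IntRange:
    """A numeric input parameter with an inclusive valid range (hypothetical)."""
    name: str
    low: int
    high: int


def traditional_bva(param: IntRange) -> List[int]:
    """Classic boundary value analysis: values at, just inside, and just
    outside each boundary of the valid range."""
    return [
        param.low - 1,   # just below lower bound (invalid)
        param.low,       # lower bound
        param.low + 1,   # just above lower bound
        param.high - 1,  # just below upper bound
        param.high,      # upper bound
        param.high + 1,  # just above upper bound (invalid)
    ]


def build_llm_prompt(param: IntRange, source_code: str) -> str:
    """Assemble a prompt asking an LLM for boundary test inputs for a
    white-box target. The wording is illustrative, not the paper's template."""
    return (
        "You are a software testing assistant.\n"
        f"Target function under test:\n{source_code}\n"
        f"The parameter '{param.name}' accepts integers in "
        f"[{param.low}, {param.high}].\n"
        "List boundary value test inputs (valid and invalid) that maximize "
        "branch coverage and expose off-by-one faults. "
        "Return one integer per line."
    )


if __name__ == "__main__":
    age = IntRange("age", 18, 65)
    print("Traditional BVA inputs:", traditional_bva(age))
    print(build_llm_prompt(age, "def is_eligible(age): return 18 <= age <= 65"))
```

The paper's evaluation then runs both test sets against the target code and compares fault detection rates and coverage; the sketch only shows how the two kinds of input sets might be produced.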
