KAT: Dependency-aware Automated API Testing with Large Language Models (2407.10227v1)
Abstract: API testing is increasingly in demand at software companies. Prior API testing tools were aware of certain types of dependencies that need to hold between operations and parameters. However, their approaches, which are mostly manual or based on heuristic algorithms, have limitations due to the complexity of these dependencies. In this paper, we present KAT (Katalon API Testing), a novel AI-driven approach that leverages the large language model GPT in conjunction with advanced prompting techniques to autonomously generate test cases for validating RESTful APIs. Our comprehensive strategy encompasses various processes to construct an operation dependency graph from an OpenAPI specification and to generate test scripts, constraint validation scripts, test cases, and test data. Our evaluation of KAT on 12 real-world RESTful services shows that it can improve test coverage, detect more undocumented status codes, and reduce false positives in these services compared with a state-of-the-art automated test generation tool. These results indicate the effectiveness of using an LLM to generate test scripts and data for API testing.
- Tri Le
- Thien Tran
- Duy Cao
- Vy Le
- Tien Nguyen
- Vu Nguyen
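To make the "operation dependency graph" idea from the abstract concrete, below is a minimal sketch of one simple heuristic for inferring producer-consumer edges from an OpenAPI document: an operation that returns a field (e.g., a created resource's `id`) is treated as a prerequisite of any operation that requires a parameter of the same name. This is an illustrative assumption, not KAT's method; the paper's point is precisely that such name-matching heuristics are limited and that an LLM is used to infer dependencies instead. All names here (`build_odg`, the demo spec) are hypothetical.

```python
# Illustrative heuristic for building an operation dependency graph (ODG)
# from an OpenAPI spec. NOT KAT's actual algorithm; a sketch only.
from collections import defaultdict


def build_odg(spec: dict) -> dict:
    """Return {producer_op: {consumer_op, ...}} edges inferred from the spec.

    Heuristic: an operation declaring a path/query parameter (e.g. `id`)
    depends on any operation whose 2xx JSON response exposes a property with
    the same name. A real tool would also handle request bodies, path-level
    parameters, and semantic (non-exact) name matches such as `userId` ~ `id`.
    """
    producers = defaultdict(set)  # field name -> operations that return it
    consumers = defaultdict(set)  # field name -> operations that require it

    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            op_id = op.get("operationId", f"{method.upper()} {path}")
            # Inputs this operation needs before it can run.
            for param in op.get("parameters", []):
                if param.get("in") in ("path", "query"):
                    consumers[param["name"]].add(op_id)
            # Fields this operation's successful responses produce.
            for status, resp in op.get("responses", {}).items():
                if not status.startswith("2"):
                    continue
                schema = (resp.get("content", {})
                              .get("application/json", {})
                              .get("schema", {}))
                for prop in schema.get("properties", {}):
                    producers[prop].add(op_id)

    # Connect every producer of a field to every consumer of that field.
    edges = defaultdict(set)
    for field, needing in consumers.items():
        for producer in producers.get(field, ()):
            for consumer in needing:
                if producer != consumer:
                    edges[producer].add(consumer)
    return dict(edges)


if __name__ == "__main__":
    demo = {
        "paths": {
            "/users": {"post": {
                "operationId": "createUser",
                "responses": {"201": {"content": {"application/json": {
                    "schema": {"properties": {"id": {}}}}}}},
            }},
            "/users/{id}": {"get": {
                "operationId": "getUser",
                "parameters": [{"name": "id", "in": "path"}],
                "responses": {},
            }},
        }
    }
    print(build_odg(demo))  # {'createUser': {'getUser'}}
```

Such a graph gives a topological ordering for test generation (create a user before fetching it); the abstract's claim is that an LLM can recover dependencies that this kind of exact-name matching misses.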