TestSpark: IntelliJ IDEA's Ultimate Test Generation Companion (2401.06580v1)
Abstract: Writing software tests is laborious and time-consuming. To address this, prior studies have introduced various automated test-generation techniques. A well-explored research direction in this field is unit test generation, in which AI techniques create tests for a method or class under test. While many of these techniques have primarily found applications in a research context, existing tools (e.g., EvoSuite, Randoop, and AthenaTest) are not user-friendly and are each tailored to a single technique. This paper introduces TestSpark, a plugin for IntelliJ IDEA that enables users to generate unit tests with only a few clicks, directly within their Integrated Development Environment (IDE). TestSpark also allows users to easily modify and run each generated test and integrate it into the project workflow. TestSpark leverages the advances of search-based test-generation tools and introduces a technique for generating unit tests with LLMs by creating a feedback cycle between the IDE and the LLM. Since TestSpark is open-source (https://github.com/JetBrains-Research/TestSpark), extensible, and well-documented, new test-generation methods can be added to the plugin with minimal effort. This paper also outlines our planned future studies related to TestSpark and presents our preliminary results. Demo video: https://youtu.be/0F4PrxWfiXo
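To make the "feedback cycle between the IDE and the LLM" mentioned in the abstract more concrete, the sketch below shows one plausible shape such a loop could take: ask the LLM for a test class, compile it with the IDE's tooling, and feed any compilation errors back into the next prompt. This is a minimal illustration, not TestSpark's actual implementation; the `LlmClient` and `Compiler` interfaces and the prompt wording are hypothetical stand-ins.

```kotlin
// Minimal sketch of an IDE-LLM feedback loop for unit test generation.
// All names here (LlmClient, Compiler, generateTests) are hypothetical,
// not TestSpark's real API.

/** Hypothetical abstraction over an LLM endpoint. */
fun interface LlmClient {
    fun complete(prompt: String): String
}

/** Hypothetical abstraction over the IDE's in-process compiler. */
fun interface Compiler {
    /** Returns an empty list when the test source compiles cleanly. */
    fun compile(testSource: String): List<String>
}

/**
 * Ask the LLM for a test class, compile it, and feed compilation errors
 * back into the prompt until the tests compile or the retry budget runs out.
 */
fun generateTests(
    classUnderTest: String,
    llm: LlmClient,
    compiler: Compiler,
    maxIterations: Int = 3,
): String? {
    var prompt = "Write JUnit tests for the following class:\n$classUnderTest"
    repeat(maxIterations) {
        val candidate = llm.complete(prompt)
        val errors = compiler.compile(candidate)
        if (errors.isEmpty()) return candidate // tests compile: accept them
        // Feedback step: include the errors so the next attempt can fix them.
        prompt = buildString {
            appendLine("The previous tests failed to compile with these errors:")
            errors.forEach { appendLine("- $it") }
            appendLine("Please return a corrected test class for:")
            append(classUnderTest)
        }
    }
    return null // give up after maxIterations unsuccessful attempts
}
```

The key design point the abstract hints at is that compilation (and, presumably, execution) results from the IDE serve as the feedback signal, so generated tests are iteratively repaired before being presented to the user.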