TestSpark: IntelliJ IDEA's Ultimate Test Generation Companion (2401.06580v1)

Published 12 Jan 2024 in cs.SE

Abstract: Writing software tests is laborious and time-consuming. To address this, prior studies introduced various automated test-generation techniques. A well-explored research direction in this field is unit test generation, wherein AI techniques create tests for a method/class under test. While many of these techniques have primarily found applications in a research context, existing tools (e.g., EvoSuite, Randoop, and AthenaTest) are not user-friendly and are tailored to a single technique. This paper introduces TestSpark, a plugin for IntelliJ IDEA that enables users to generate unit tests with only a few clicks directly within their Integrated Development Environment (IDE). Furthermore, TestSpark allows users to easily modify and run each generated test and integrate it into the project workflow. TestSpark leverages advances in search-based test generation tools and introduces a technique for generating unit tests with LLMs by creating a feedback cycle between the IDE and the LLM. Since TestSpark is open-source (https://github.com/JetBrains-Research/TestSpark), extendable, and well documented, new test generation methods can be added to the plugin with minimal effort. This paper also describes our planned studies related to TestSpark and our preliminary results. Demo video: https://youtu.be/0F4PrxWfiXo
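
The abstract's key technical idea is the feedback cycle between the IDE and the LLM: candidate tests are compiled inside the project, and compilation errors are fed back into the next prompt. The sketch below shows one plausible shape of such a loop in Kotlin (a natural choice for an IntelliJ IDEA plugin). It is an illustration under stated assumptions, not TestSpark's actual implementation; every name in it (LlmClient, CompilationResult, generateTests, the compileAndRun callback) is hypothetical.

// Hypothetical sketch of an IDE-LLM feedback cycle for unit test generation.
// None of these names come from TestSpark; they only illustrate the idea.

/** Minimal interface over any LLM backend. */
interface LlmClient {
    fun complete(prompt: String): String
}

/** Outcome of compiling and running a candidate test class inside the IDE. */
data class CompilationResult(val success: Boolean, val errors: List<String>)

/**
 * Ask the LLM for tests, compile them via the injected compileAndRun callback
 * (a stand-in for the IDE's build and test machinery), and feed any errors
 * back into the next prompt until the tests compile or the retry budget
 * runs out.
 */
fun generateTests(
    llm: LlmClient,
    compileAndRun: (String) -> CompilationResult,
    classUnderTest: String,
    maxAttempts: Int = 3
): String? {
    var prompt = "Write JUnit tests for the following class:\n$classUnderTest"
    repeat(maxAttempts) {
        val candidate = llm.complete(prompt)
        val result = compileAndRun(candidate)
        if (result.success) return candidate
        // Feedback step: the compiler errors become part of the next request.
        prompt = buildString {
            appendLine("These tests failed to compile:")
            appendLine(candidate)
            appendLine("Compiler errors:")
            result.errors.forEach { appendLine(it) }
            append("Please return a corrected version of the tests.")
        }
    }
    return null // budget exhausted; the user can still write or edit tests manually
}

Passing compileAndRun as a function keeps the sketch self-contained: in a unit test it can be a stub that fails once and then succeeds, while a real plugin would delegate it to the project's build system and test runner.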

References (20)
  1. 2021. Kex. https://github.com/vorpal-research/kex/tree/sbst-contest.
  2. 2022. Kex-Reflection. https://github.com/vorpal-research/kex/tree/sbst2022-reflection.
  3. 2022. UTBot. https://github.com/UnitTestBot.
  4. Andrea Arcuri. 2019. RESTful API automated test case generation with EvoMaster. ACM Transactions on Software Engineering and Methodology (TOSEM) 28, 1 (2019), 1–37.
  5. Code Generation Tools (Almost) for Free? A Study of Few-Shot, Pre-Trained Language Models on Code. CoRR abs/2206.01335 (2022).
  6. When, how, and why developers (do not) test in their IDEs. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. 179–190.
  7. SUSHI: a test generator for programs with complex structured inputs. In Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings. 21–24.
  8. Do automatically generated test cases make debugging easier? An experimental assessment of debugging effectiveness and efficiency. ACM Transactions on Software Engineering and Methodology (TOSEM) 25, 1 (2015), 1–38.
  9. Pouria Derakhshanfar and Xavier Devroey. 2022. Basic block coverage for unit test generation at the SBST 2022 tool competition. In Proceedings of the 15th Workshop on Search-Based Software Testing. 37–38.
  10. Generating Class-Level Integration Tests Using Call Site Information. IEEE Transactions on Software Engineering 49, 4 (2022), 2069–2087.
  11. Gordon Fraser and Andrea Arcuri. 2011. EvoSuite: automatic test suite generation for object-oriented software. In Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering. 416–419.
  12. Gunel Jahangirova and Valerio Terragni. 2023. SBFT tool competition 2023 - Java test case generation track. In 2023 IEEE/ACM International Workshop on Search-Based and Fuzz Testing (SBFT). IEEE, 61–64.
  13. Large language models are few-shot testers: Exploring LLM-based general bug reproduction. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2312–2323.
  14. CODAMOSA: Escaping coverage plateaus in test generation with pre-trained large language models. In International Conference on Software Engineering (ICSE).
  15. Phil McMinn. 2011. Search-based software testing: Past, present and future. In 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops. IEEE, 153–163.
  16. Automated test case generation as a many-objective optimisation problem with dynamic selection of the targets. IEEE Transactions on Software Engineering 44, 2 (2017), 122–158.
  17. A large scale empirical comparison of state-of-the-art search-based test case generators. Information and Software Technology 104 (2018), 236–256.
  18. An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation. IEEE Transactions on Software Engineering (2023).
  19. Do automatically generated unit tests find real faults? An empirical study of effectiveness and challenges (T). In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 201–211.
  20. Unit test case generation with transformers and focal context. arXiv preprint arXiv:2009.05617 (2020).