
Identifying Flakiness in Quantum Programs (2302.03256v2)

Published 7 Feb 2023 in cs.SE

Abstract: In recent years, software engineers have explored ways to assist quantum software programmers. Our goal in this paper is to continue this exploration and see if quantum software programmers deal with some problems plaguing classical programs. Specifically, we examine whether intermittently failing tests, i.e., flaky tests, affect quantum software development. To explore flakiness, we conduct a preliminary analysis of 14 quantum software repositories. Then, we identify flaky tests and categorize their causes and methods of fixing them. We find flaky tests in 12 out of 14 quantum software repositories. In these 12 repositories, the lower boundary of the percentage of issues related to flaky tests ranges between 0.26% and 1.85% per repository. We identify 46 distinct flaky test reports with 8 groups of causes and 7 common solutions. Further, we notice that quantum programmers are not using some of the recent flaky test countermeasures developed by software engineers. This work may interest practitioners, as it provides useful insight into the resolution of flaky tests in quantum programs. Researchers may also find the paper helpful as it offers quantitative data on flaky tests in quantum software and points to new research opportunities.
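As a concrete illustration of the phenomenon the paper studies, the sketch below shows how a test of a probabilistic quantum measurement can fail intermittently, and how two of the common fixes the literature describes (fixing the random seed; widening the assertion bound to match sampling error) make it deterministic. This is a hypothetical example using a classically simulated coin-flip measurement, not code from the paper or from any quantum SDK.

```python
import random

def measure_plus_state(shots, rng):
    """Simulate measuring the |+> state 'shots' times.

    Each measurement yields 1 with probability 0.5; the function
    returns the observed frequency of 1 outcomes.
    """
    return sum(rng.random() < 0.5 for _ in range(shots)) / shots

def flaky_test():
    # Flaky: an unseeded RNG plus a tight bound (0.01 is well inside
    # the ~1/sqrt(shots) = 0.05 sampling error at 100 shots) means this
    # assertion passes on some runs and fails on others.
    freq = measure_plus_state(shots=100, rng=random.Random())
    return abs(freq - 0.5) < 0.01

def deterministic_test():
    # Fixes: seed the RNG for reproducibility, and size the bound to the
    # statistical noise (std ~ 0.5/sqrt(10000) = 0.005, so 0.03 is ~6 sigma).
    freq = measure_plus_state(shots=10_000, rng=random.Random(42))
    return abs(freq - 0.5) < 0.03

print(deterministic_test())
```

The same structure appears in real quantum test suites: assertions over measurement histograms are inherently statistical, so the choice of seed, shot count, and tolerance determines whether a test is stable or flaky.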
