An approach for performance requirements verification and test environments generation (2403.00099v1)

Published 29 Feb 2024 in cs.SE

Abstract: Model-based testing (MBT) is a method that supports the design and execution of test cases by models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has received much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications to understand which performance requirements are typically specified and how. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and show that with PRO-TEST we can model performance requirements, find issues in those requirements, and detect missing ones. We detected three not-quantifiable requirements, 43 not-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, the approach supports generating parameters for test environments.
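The workflow the abstract describes, checking whether a performance requirement is quantified and then deriving test-environment parameters from it, can be illustrated with a minimal sketch. This is not the authors' implementation: the `PerformanceRequirement` model, its fields (metric, threshold, unit, workload), and the `verify`/`test_environment` functions below are illustrative assumptions based only on the issue categories the abstract reports (not-quantifiable, not-quantified, underspecified parameters).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceRequirement:
    """Illustrative requirement model; not PRO-TEST's actual schema."""
    text: str                           # original natural-language requirement
    metric: Optional[str] = None        # e.g. "response time", "throughput"
    threshold: Optional[float] = None   # quantified target value
    unit: Optional[str] = None          # e.g. "s", "requests/s"
    workload: Optional[int] = None      # load context, e.g. concurrent users

def verify(req: PerformanceRequirement) -> list[str]:
    """Flag the issue types the paper reports: not-quantifiable,
    not-quantified, and underspecified parameters."""
    issues = []
    if req.metric is None:
        issues.append("not quantifiable: no measurable metric identified")
    elif req.threshold is None:
        issues.append("not quantified: metric has no target value")
    else:
        if req.unit is None:
            issues.append("underspecified: threshold lacks a unit")
        if req.workload is None:
            issues.append("underspecified: no workload/load context given")
    return issues

def test_environment(req: PerformanceRequirement) -> dict:
    """Derive test-environment parameters from a fully specified requirement."""
    if verify(req):
        raise ValueError("requirement must pass verification first")
    return {"metric": req.metric,
            "target": f"{req.threshold} {req.unit}",
            "virtual_users": req.workload}

# "The system shall respond within 2 seconds under 100 concurrent users."
req = PerformanceRequirement(text="respond within 2 s under 100 users",
                             metric="response time", threshold=2.0,
                             unit="s", workload=100)
print(verify(req))            # [] -> no issues found
print(test_environment(req))  # parameters for a load-test environment
```

A requirement such as "the system shall be fast" would yield "not quantifiable" under this sketch, while "response time shall be under 2" would be flagged as underspecified (no unit, no workload), matching the kinds of defects the paper counts.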

Authors (3)
  1. Waleed Abdeen (7 papers)
  2. Xingru Chen (14 papers)
  3. Michael Unterkalmsteiner (73 papers)