
Test Generation Strategies for Building Failure Models and Explaining Spurious Failures (2312.05631v1)

Published 9 Dec 2023 in cs.SE

Abstract: Test inputs fail not only when the system under test is faulty but also when the inputs are invalid or unrealistic. Failures resulting from invalid or unrealistic test inputs are spurious. Avoiding spurious failures improves the effectiveness of testing in exercising the main functions of a system, particularly for compute-intensive (CI) systems where a single test execution takes significant time. In this paper, we propose to build failure models for inferring interpretable rules on test inputs that cause spurious failures. We examine two alternative strategies for building failure models: (1) ML-guided test generation and (2) surrogate-assisted test generation. ML-guided test generation infers boundary regions that separate passing and failing test inputs and samples test inputs from those regions. Surrogate-assisted test generation relies on surrogate models to predict labels for test inputs instead of exercising all the inputs. We propose a novel surrogate-assisted algorithm that uses multiple surrogate models simultaneously, and dynamically selects the prediction from the most accurate model. We empirically evaluate the accuracy of failure models inferred based on surrogate-assisted and ML-guided test generation algorithms. Using case studies from the domains of cyber-physical systems and networks, we show that our proposed surrogate-assisted approach generates failure models with an average accuracy of 83%, significantly outperforming ML-guided test generation and two baselines. Further, our approach learns failure-inducing rules that identify genuine spurious failures as validated against domain knowledge.
