
TEASMA: A Practical Methodology for Test Adequacy Assessment of Deep Neural Networks (2308.01311v4)

Published 2 Aug 2023 in cs.SE

Abstract: Successful deployment of Deep Neural Networks (DNNs) requires their validation with an adequate test set to ensure a sufficient degree of confidence in test outcomes. Although well-established test adequacy assessment techniques have been proposed for DNNs, we still need to investigate their application within a comprehensive methodology for accurately predicting the fault detection ability of test sets and thus assessing their adequacy. In this paper, we propose and evaluate TEASMA, a comprehensive and practical methodology designed to accurately assess the adequacy of test sets for DNNs. In practice, TEASMA allows engineers to decide whether they can trust high-accuracy test results and thus validate the DNN before its deployment. Based on a DNN model's training set, TEASMA provides a procedure to build accurate DNN-specific prediction models of the Fault Detection Rate (FDR) of a test set using an existing adequacy metric, thus enabling its assessment. We evaluated TEASMA with four state-of-the-art test adequacy metrics: Distance-based Surprise Coverage (DSC), Likelihood-based Surprise Coverage (LSC), Input Distribution Coverage (IDC), and Mutation Score (MS). Our extensive empirical evaluation across multiple DNN models and input sets such as ImageNet reveals a strong linear correlation between the predicted and actual FDR values derived from MS, DSC, and IDC, with minimum R2 values of 0.94 for MS and 0.90 for DSC and IDC. Furthermore, a low average Root Mean Square Error (RMSE) of 9% between actual and predicted FDR values across all subjects, when relying on regression analysis and MS, demonstrates the latter's superior accuracy when compared to DSC and IDC, with RMSE values of 0.17 and 0.18, respectively. Overall, these results suggest that TEASMA provides a reliable basis for confidently deciding whether to trust test results for DNN models.
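
To make the procedure concrete, the minimal sketch below illustrates the kind of regression-based prediction the abstract describes: fitting a model that maps an adequacy metric (here, Mutation Score) measured on training-set subsets to their observed FDR, checking fit quality with R2 and RMSE, and then estimating the FDR of a candidate test set. The data values, the simple linear model, and the scikit-learn API choices are illustrative assumptions, not the paper's exact procedure, which evaluates several regression models and four metrics (MS, DSC, LSC, IDC).

```python
# Hypothetical sketch of TEASMA-style FDR prediction from an adequacy metric.
# All data values are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

# Adequacy scores (e.g., Mutation Score) and actual FDR measured on
# subsets sampled from the training set (assumed values).
ms_train = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85]).reshape(-1, 1)
fdr_train = np.array([0.12, 0.28, 0.41, 0.57, 0.69, 0.83])

# Fit a DNN-specific prediction model of FDR from the adequacy metric.
model = LinearRegression().fit(ms_train, fdr_train)

# Goodness of fit on the training subsets: R^2 and RMSE.
fdr_pred = model.predict(ms_train)
r2 = r2_score(fdr_train, fdr_pred)
rmse = np.sqrt(mean_squared_error(fdr_train, fdr_pred))
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f}")

# At validation time, compute the adequacy score of the candidate test set
# and use the fitted model to estimate its FDR before deployment.
candidate_ms = np.array([[0.62]])
print(f"Predicted FDR: {model.predict(candidate_ms)[0]:.2f}")
```

In this sketch, a high predicted FDR would support trusting the test set's results, while a low predicted FDR would signal that the test set is inadequate despite possibly high accuracy.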

