Testing Reactive Systems Using Behavioural Programming, a Model Centric Approach (2112.01538v2)

Published 2 Dec 2021 in cs.SE

Abstract: Testing is a significant aspect of software development. As systems become more complex and their use becomes critical to the security and functioning of society, the need for testing methodologies that ensure reliability and detect faults as early as possible becomes critical. The most promising approach is the model-based approach, in which a model is developed that defines how the system is expected to behave and react. The tests are derived from the model, and the analysis of the test results is conducted based on it. We will investigate the prospects of using Behavioral Programming (BP) for a model-based testing (MBT) approach that we will develop. We will develop a natural language for representing the requirements. The model will be fed to algorithms that we will develop, including algorithms for the automatic creation of minimal sets of test cases that cover all of the system's requirements, for analysing the results of the tests, and other tools that support the testing process. The focus of our methodology will be on finding faults caused by the interaction between different requirements in ways that are difficult for testers to detect. Specifically, we will focus on concurrency issues such as deadlocks and logical race conditions. We will use a variety of methods that are made possible by BP, such as non-deterministic execution of scenarios and the use of in-code model checking for building test scenarios and for finding minimal coverage of the test scenarios for the system requirements using Combinatorial Test Design (CTD) methodologies. We will develop a proof-of-concept toolkit that will allow us to demonstrate and evaluate the above capabilities. We will compare the performance of our tools with that of manual testers and of other model-based tools, using comparison criteria that we will define and develop.
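
The abstract leans on Behavioral Programming's scenario idiom: b-threads that request, wait for, and block events, with an event-selection mechanism that can choose among the enabled events non-deterministically. The paper itself includes no code; the sketch below is only a minimal, self-contained Python illustration of that idiom (the HOT/COLD "alternation" example commonly used to introduce BP), not the author's proposed toolkit. All names (`water_low`, `stability`, `run`) are hypothetical.

```python
import random

# Illustrative only: a tiny behavioral-programming-style engine, not the
# paper's toolkit. Each scenario (b-thread) is a generator that yields a
# sync point declaring the events it requests, waits for, and blocks.

def water_low():
    # Requests three HOT additions.
    for _ in range(3):
        yield {"request": {"HOT"}, "wait": set(), "block": set()}

def water_low_cold():
    # Requests three COLD additions.
    for _ in range(3):
        yield {"request": {"COLD"}, "wait": set(), "block": set()}

def stability():
    # Safety scenario: forces HOT and COLD to alternate by blocking the other.
    for _ in range(3):
        yield {"request": set(), "wait": {"HOT"}, "block": {"COLD"}}
        yield {"request": set(), "wait": {"COLD"}, "block": {"HOT"}}

def run(bthreads, seed=None):
    rng = random.Random(seed)
    # Advance every b-thread to its first sync point.
    sync_points = {}
    for bt in bthreads:
        try:
            sync_points[bt] = next(bt)
        except StopIteration:
            pass
    trace = []
    while sync_points:
        blocked = set().union(*(s["block"] for s in sync_points.values()))
        requested = set().union(*(s["request"] for s in sync_points.values()))
        enabled = sorted(requested - blocked)
        if not enabled:
            break  # every requested event is blocked: a deadlock-like stuck state
        event = rng.choice(enabled)  # non-deterministic event selection
        trace.append(event)
        # Resume every b-thread that requested or waited for the chosen event.
        for bt, s in list(sync_points.items()):
            if event in s["request"] | s["wait"]:
                try:
                    sync_points[bt] = next(bt)
                except StopIteration:
                    del sync_points[bt]
    return trace

print(run([water_low(), water_low_cold(), stability()], seed=1))
# e.g. ['HOT', 'COLD', 'HOT', 'COLD', 'HOT', 'COLD']
```

The random choice at each sync point stands in for the non-deterministic scenario execution the abstract mentions: re-running with different seeds explores different interleavings, and a stuck state with pending requests but no enabled event corresponds to the kind of deadlock the proposed tools aim to surface.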

Authors (1)
  1. Yeshayahu Weiss (3 papers)
Citations (2)