Re-evaluation of Logical Specification in Behavioural Verification (2505.17979v1)
Abstract: This study empirically validates automated logical specification methods for behavioural models, focusing on their robustness, scalability, and reproducibility. Through systematic reproduction and extension of prior results, we confirm key trends while also identifying performance irregularities that suggest the need for adaptive heuristics in automated reasoning. Our findings show that theorem provers exhibit varying efficiency across problem structures, with implications for real-time verification in CI/CD pipelines and for AI-driven IDEs that support on-the-fly validation. Addressing these inefficiencies through self-optimising solvers could improve the stability of automated reasoning, particularly in safety-critical software verification.
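The observation that prover efficiency varies with problem structure can be illustrated with a minimal timing sketch. The example below is a hypothetical illustration, not the paper's actual experimental setup: it assumes the Z3 SMT solver (via the `z3-solver` Python package) as a stand-in prover, and it compares wall-clock solving time for two structurally different but logically equivalent encodings of the same pigeonhole constraints.

```python
# Hypothetical illustration only: the provers, benchmarks, and measurement
# harness used in the paper are not specified here. This sketch assumes the
# Z3 SMT solver (pip install z3-solver) as a representative prover and times
# two structurally different encodings of the same (unsatisfiable) problem.
import time
from z3 import Solver, Bool, Or, Not, AtMost

def pigeonhole_clausal(n):
    """n+1 pigeons, n holes, pairwise-exclusion clauses (quadratic encoding)."""
    s = Solver()
    x = [[Bool(f"p{i}_h{j}") for j in range(n)] for i in range(n + 1)]
    for i in range(n + 1):                      # every pigeon sits in some hole
        s.add(Or(x[i]))
    for j in range(n):                          # no two pigeons share a hole
        for i in range(n + 1):
            for k in range(i + 1, n + 1):
                s.add(Or(Not(x[i][j]), Not(x[k][j])))
    return s

def pigeonhole_cardinality(n):
    """Same problem, but using AtMost cardinality constraints per hole."""
    s = Solver()
    x = [[Bool(f"p{i}_h{j}") for j in range(n)] for i in range(n + 1)]
    for i in range(n + 1):
        s.add(Or(x[i]))
    for j in range(n):
        s.add(AtMost(*[x[i][j] for i in range(n + 1)], 1))
    return s

def timed_check(solver):
    """Return the solver verdict and elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    result = solver.check()
    return result, time.perf_counter() - start

if __name__ == "__main__":
    for n in (6, 7, 8):
        for name, builder in (("clausal", pigeonhole_clausal),
                              ("cardinality", pigeonhole_cardinality)):
            verdict, elapsed = timed_check(builder(n))
            print(f"n={n:2d}  {name:11s}  {verdict}  {elapsed:.3f}s")
```

In a reproduction study of the kind the abstract describes, such a harness would be wrapped with repeated runs, multiple provers, and larger problem families so that genuine structural effects can be separated from measurement noise.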