Investigate practical scalability of CCwL* redundancy elimination methods

Investigate the practical scalability of the redundancy elimination techniques in the contextual componentwise learning algorithm CCwL*, including rigorous evaluation on large, real-world compositional systems with known network structures and an assessment of the performance impact of large alphabets and of the cost of context analysis.

Background

The authors introduce CCwL*, an algorithm that performs context analysis to prune component redundancies in Moore machine networks, and report promising experimental results on benchmarks such as MQTT and BinaryCounter. They discuss cost models and trade-offs in the context analysis parameters (component abstraction and reachability bounds), identifying classes of systems for which coarse-grained or fine-grained analysis is beneficial.
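To make the pruning idea concrete, the following Python sketch models a toy Moore machine component and restricts a downstream component's input alphabet to the outputs its upstream neighbour can emit within a bounded number of steps. This is only a minimal illustration of reachability-bounded context analysis under assumed interfaces; the names MooreMachine, reachable_outputs, and prune_alphabet are hypothetical and not taken from the paper's CCwL* implementation.

```python
# Hedged sketch: a toy Moore machine and a bounded "context analysis" that
# restricts a downstream component's effective input alphabet to the outputs
# its upstream neighbour can actually produce within `bound` steps.
# All names here (MooreMachine, reachable_outputs, prune_alphabet) are
# illustrative and not part of the paper's CCwL* implementation.

from collections import deque

class MooreMachine:
    def __init__(self, states, inputs, init, trans, out):
        self.states = states          # finite set of states
        self.inputs = inputs          # input alphabet
        self.init = init              # initial state
        self.trans = trans            # dict: (state, input) -> state
        self.out = out                # dict: state -> output symbol

def reachable_outputs(m: MooreMachine, bound: int) -> set:
    """Outputs the machine can emit within `bound` transitions (BFS)."""
    seen = {m.init}
    frontier = deque([(m.init, 0)])
    outputs = {m.out[m.init]}
    while frontier:
        state, depth = frontier.popleft()
        if depth == bound:
            continue
        for a in m.inputs:
            nxt = m.trans.get((state, a))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                outputs.add(m.out[nxt])
                frontier.append((nxt, depth + 1))
    return outputs

def prune_alphabet(downstream: MooreMachine, upstream: MooreMachine, bound: int) -> set:
    """Keep only downstream inputs that the upstream component can produce."""
    feasible = reachable_outputs(upstream, bound)
    return set(downstream.inputs) & feasible

# Usage example: an upstream 2-state machine only ever emits 'a' or 'b', so a
# downstream component declared over {'a', 'b', 'c'} can drop 'c'.
up = MooreMachine(
    states={0, 1}, inputs={"x"}, init=0,
    trans={(0, "x"): 1, (1, "x"): 0},
    out={0: "a", 1: "b"},
)
down = MooreMachine(
    states={0}, inputs={"a", "b", "c"}, init=0,
    trans={(0, s): 0 for s in {"a", "b", "c"}},
    out={0: "ok"},
)
print(prune_alphabet(down, up, bound=3))  # {'a', 'b'}
```

Benchmarking this kind of pruning on large networks is where the trade-off discussed above shows up: a larger reachability bound or a finer component abstraction can remove more redundancy, but the analysis itself becomes more expensive.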

They explicitly state that the practical scalability of their redundancy elimination methods has not yet been investigated in depth, highlighting the need for large, real-world benchmarks and for techniques to handle large alphabets (e.g., symbolic abstractions). Such an investigation would establish applicability limits, guide parameter choices, and inform further algorithmic improvements.
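For the large-alphabet concern, a common ingredient of symbolic abstraction (which the authors mention only as a possible direction, not as part of CCwL*) is a mapper that lets the learner work over a small abstract alphabet while concrete inputs are translated back and forth. The sketch below shows the basic shape of such a mapper; the class names (CONNECT, PUBLISH, OTHER), byte values, and functions make_abstraction, lift, and lower are hypothetical and chosen purely for illustration.

```python
# Hedged sketch of an input-alphabet abstraction (a "mapper"): concrete symbols
# are grouped into a few abstract classes so that the learner only deals with
# the small abstract alphabet, and abstract inputs are concretized through a
# representative when the system under learning is queried. The class names and
# byte values are illustrative assumptions, not the abstraction used in the paper.

def make_abstraction(classes: dict[str, set]):
    """Return (concrete -> abstract class, abstract class -> representative concrete)."""
    to_abstract = {c: name for name, concretes in classes.items() for c in concretes}
    to_concrete = {name: min(concretes) for name, concretes in classes.items()}
    return to_abstract, to_concrete

# Example: a 256-symbol byte alphabet collapses to three abstract input classes.
classes = {
    "CONNECT": {0x10},
    "PUBLISH": {0x30, 0x31},
    "OTHER":   set(range(256)) - {0x10, 0x30, 0x31},
}
to_abstract, to_concrete = make_abstraction(classes)

def lift(concrete_word: list[int]) -> list[str]:
    """Abstract a concrete input word before it is handed to the learner."""
    return [to_abstract[b] for b in concrete_word]

def lower(abstract_word: list[str]) -> list[int]:
    """Concretize an abstract word (e.g. a learner query) for the real system."""
    return [to_concrete[a] for a in abstract_word]

print(lift([0x10, 0x31, 0xFF]))        # ['CONNECT', 'PUBLISH', 'OTHER']
print(lower(["CONNECT", "PUBLISH"]))   # [16, 48]
```

The practical cost of constructing and refining such a mapper, weighed against the query savings it yields, is part of what a scalability evaluation would need to measure.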

References

Summarizing the above discussions along RQ1--4, we conclude that 1) we are yet to investigate in-depth the practical scalability of our redundancy elimination methods, but 2) with the experimental results that show the efficiency of CCwL* for several benchmarks, the current work definitely opens promising avenues for future research.

Componentwise Automata Learning for System Integration (Extended Version) (2508.04458 - Fujinami et al., 6 Aug 2025) in Implementation and Experiments, Results and Discussions (RQ4)