Maximal and minimal dynamic Petri net slicing (2104.03425v5)

Published 7 Apr 2021 in cs.PL and cs.FL

Abstract:

Context: Petri net slicing is a technique that reduces the size of a Petri net to ease the analysis or understanding of the original net.

Objective: We present two new Petri net slicing algorithms that isolate the places and transitions of a Petri net (the slice) that may contribute tokens to one or more given places (the slicing criterion).

Method: The two proposed algorithms are formalized. The maximality of the first algorithm and the minimality of the second are formally proven. Both algorithms, together with three other state-of-the-art algorithms, have been implemented and integrated into a single tool, enabling a fair empirical evaluation.

Results: Besides the two new Petri net slicing algorithms, we report a public, free, and open-source implementation of five algorithms, together with the results of an empirical evaluation of the new algorithms and the slices they produce.

Conclusions: The first algorithm collects all places and transitions that may contribute tokens (in any computation) to the slicing criterion, while the second collects the places and transitions needed to fire the shortest transition sequence that contributes tokens to some place in the criterion. The net computed by the first algorithm can therefore reproduce any computation that contributes tokens to any place of interest. The second algorithm loses this guarantee, but it often produces a much smaller subnet (which can still reproduce some computations that contribute tokens to some places of interest). The first algorithm is proven maximal, and the second minimal.
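For intuition, here is a minimal sketch of the backward-collection idea behind the first (maximal) algorithm: starting from the criterion places, repeatedly add every transition that can deposit a token into an already-relevant place, along with that transition's input places, until a fixpoint is reached. The PetriNet class and backward_slice function below are hypothetical illustrations assuming a simple set-based net representation; they do not reproduce the paper's formalization.

```python
from dataclasses import dataclass

@dataclass
class PetriNet:
    places: set[str]
    transitions: set[str]
    pre: dict[str, set[str]]   # transition -> its input places (preset)
    post: dict[str, set[str]]  # transition -> its output places (postset)

def backward_slice(net: PetriNet, criterion: set[str]) -> PetriNet:
    """Collect every place and transition that may contribute tokens
    (in some computation) to the slicing criterion."""
    places = set(criterion)
    transitions: set[str] = set()
    changed = True
    while changed:
        changed = False
        for t in net.transitions - transitions:
            # A transition is relevant if it can put a token
            # into an already-relevant place.
            if net.post[t] & places:
                transitions.add(t)
                # Its input places become relevant in turn.
                places |= net.pre[t]
                changed = True
    # Restrict the arcs of the kept transitions to the kept places.
    return PetriNet(places, transitions,
                    {t: net.pre[t] & places for t in transitions},
                    {t: net.post[t] & places for t in transitions})

# Usage on a toy net p1 -t1-> p2 -t2-> p3: slicing on {"p3"} keeps the
# whole chain, since t2 feeds p3 and t1 feeds t2's input place p2.
net = PetriNet(
    places={"p1", "p2", "p3"},
    transitions={"t1", "t2"},
    pre={"t1": {"p1"}, "t2": {"p2"}},
    post={"t1": {"p2"}, "t2": {"p3"}},
)
sliced = backward_slice(net, {"p3"})
```

The second (minimal) algorithm would instead keep only what is needed to fire one shortest token-contributing transition sequence, which generally yields a much smaller subnet than the fixpoint above.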

Citations (1)
