
Dynamic Program Slicing

Updated 9 January 2026
  • Dynamic program slicing is a technique that isolates the executed statements influencing a variable’s value using a specific slicing criterion.
  • It employs dynamic dependence graphs and backward trace analysis to accurately capture data and control flow dependencies from concrete inputs.
  • Recent advances integrate machine learning, symbolic abstractions, and on-demand re-execution to enhance scalability and precision in diverse programming paradigms.

Dynamic program slicing is a precise program analysis technique that isolates, for a specific execution, exactly the set of executed statements that contributed to a particular variable’s value or control flow at a given statement instance (the slicing criterion). Unlike static slicing—whose slices are over-approximations valid for all inputs—dynamic slicing computes per-run slices that accurately reflect the observed dependencies induced by the concrete input and the actual path of execution. Fundamentally, dynamic slicing relies on dynamic dependence graphs, operational traceability of data and control flow, and criterion-driven backward trace traversal. The field encompasses classical imperative and object-oriented languages, concurrent and aspect-oriented constructs, functional paradigms, and recent integration with machine learning-based tools.

1. Formal Foundations of Dynamic Program Slicing

At its core, dynamic slicing operates with respect to a dynamic slicing criterion, which generally consists of a program location and a variable (or set of variables) at a specific instance during execution. Let $P$ be a program, run with concrete input $I$, resulting in an execution trace $\tau = \langle s_1, s_2, \ldots, s_n \rangle$, where $s_i$ denotes the $i$th executed statement instance. The slicing criterion $C = (p, V)$ identifies program point $p$ and the set of variables $V$. The dynamic dependence graph (DDG) $G_d = (N, E_d \cup E_c)$ has nodes $N$ corresponding to executed statement instances, with $E_d$ capturing data dependencies (definitions and uses of variables) and $E_c$ capturing control dependencies (branching and predicate effects).

The dynamic slice $S(C)$ is defined as the set of all statement instances $q$ such that there exists a chain of dynamic dependencies in the DDG from $q$ to the criterion $p$ on some $v \in V$:

$$S(C) = \{\, q \mid \text{there is a path in } G_d \text{ from } q \text{ to } p \text{ via data/control dependencies on } V \,\}$$

This formalization is operationalized by building the DDG during execution and performing a backward reachability analysis starting from the criterion node (Sasirekha et al., 2011, Shahandashti et al., 2024, Soremekun et al., 2021).
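The construction above can be made concrete in a short sketch. This is purely illustrative (the `Step` record, trace format, and helper names are assumptions, not taken from the cited papers): each executed statement instance records what it defines, what it uses, and which predicate instance controls it; the DDG and the backward slice then follow directly.

```python
from collections import namedtuple, deque

# One executed statement instance: id, source line, variables defined/used,
# and the id of the controlling predicate instance (None at top level).
Step = namedtuple("Step", "id line defs uses ctrl")

def build_ddg(trace):
    """Build dynamic data (E_d) and control (E_c) dependence edges from a trace."""
    edges = {s.id: set() for s in trace}
    last_def = {}                       # variable -> id of its latest definition
    for s in trace:
        for v in s.uses:                # data dependence: use -> reaching definition
            if v in last_def:
                edges[s.id].add(last_def[v])
        if s.ctrl is not None:          # control dependence: stmt -> its predicate
            edges[s.id].add(s.ctrl)
        for v in s.defs:
            last_def[v] = s.id
    return edges

def dynamic_slice(trace, edges, criterion_id):
    """Backward reachability in the DDG from the criterion instance."""
    seen, work = {criterion_id}, deque([criterion_id])
    while work:
        n = work.popleft()
        for m in edges[n]:
            if m not in seen:
                seen.add(m)
                work.append(m)
    return {s.line for s in trace if s.id in seen}

# Trace of: a=1; b=2; if a>0: c=a+1; print(c)   (b never influences c)
trace = [
    Step(0, "L1", {"a"}, set(), None),
    Step(1, "L2", {"b"}, set(), None),
    Step(2, "L3", set(), {"a"}, None),   # predicate a > 0
    Step(3, "L4", {"c"}, {"a"}, 2),      # guarded by the predicate at L3
    Step(4, "L5", set(), {"c"}, None),   # criterion: c at L5
]
edges = build_ddg(trace)
print(sorted(dynamic_slice(trace, edges, 4)))   # ['L1', 'L3', 'L4', 'L5']
```

Note that `L2` (the definition of `b`) is correctly excluded: no dependence chain connects it to the criterion.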

2. Algorithmic Techniques and Data Structures

Given the execution trace and the DDG, the most common dynamic slicing algorithms follow a backward worklist approach. For each variable $v$ in the criterion, the algorithm traces back the chain of definitions and control predicates that impact its value.
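A common alternative to materializing the full graph is a single backward pass over the trace with a set of still-unexplained "relevant" variables. A minimal sketch follows (the trace tuple shape and the way predicates name the lines they guard are simplifying assumptions of this illustration):

```python
def backward_worklist_slice(trace, criterion_vars):
    """Single backward pass over a trace of
    (line, defs, uses, guards) tuples in execution order, where `guards`
    is the set of lines this predicate instance controls.
    Returns the lines whose execution influenced `criterion_vars` at the end."""
    relevant = set(criterion_vars)       # variables still to be explained
    sliced = set()
    for line, defs, uses, guards in reversed(trace):
        influences = bool(defs & relevant)
        controls = bool(guards & sliced)   # predicate guarding a sliced stmt
        if influences or controls:
            sliced.add(line)
            relevant -= defs               # these definitions are now explained...
            relevant |= uses               # ...but their operands must be
    return sliced

# Trace of: x=1; y=2; if x>0: z=x*3; out=z
trace = [
    ("L1", {"x"}, set(), set()),
    ("L2", {"y"}, set(), set()),
    ("L3", set(), {"x"}, {"L4"}),   # predicate x > 0 guards L4
    ("L4", {"z"}, {"x"}, set()),
    ("L5", {"out"}, {"z"}, set()),
]
print(sorted(backward_worklist_slice(trace, {"out"})))   # ['L1', 'L3', 'L4', 'L5']
```

Because the trace is walked in reverse, guarded statements are processed before their predicates, so a predicate is pulled into the slice exactly when something it controls is already there.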

Dynamic slicing for object-oriented languages employs a statically constructed Control Dependence Graph (CDG) together with runtime maintenance of ActiveDataSlice and ActiveControlSlice arrays, with specialized handling for object fields and function overloading (Pani et al., 2010). For aspect-oriented programs, dynamic slicing uses the Aspect-Oriented System Dependence Graph (AOSG), a directed graph capturing not just control and data dependencies but also aspect weaving, join-point triggers, advice invocations, and cross-cutting edges, traversed via an edge-marking algorithm during execution (Ray et al., 2014).

For concurrent and constraint-based paradigms, the DDG is extended to include thread-synchronization, inter-agent communication, constraint stores, and non-deterministic choices (Falaschi et al., 2016, Ray et al., 2014, Perera et al., 2016). In functional, imperative, and gradually typed languages, dynamic slicing can be formalized through Galois connections between partial program and partial output lattices, yielding minimal guarantees of sufficiency and correctness for backward slices (Ricciotti et al., 2017, Stolarek et al., 2019, Schwerter et al., 27 Feb 2025).

3. Scalability, Hybrid, and Symbolic Approaches

Scaling dynamic slicing to large codebases faces challenges from heavy instrumentation, trace size, and dependence-graph growth. To address these, recent work has introduced focused dynamic slicing using abstract memory models, which represent heap and stack dependencies with symbolic terms rather than concrete references and instrument only the code under study. Havoc summarization is used for external or library code, enabling up to $218\times$ performance improvement with minimal precision loss in large-scale applications (Soifer et al., 2022). Statistical program slicing offers a hybrid between static and dynamic approaches by sampling data-flow dependencies, leveraging hardware-supported control-flow traces (e.g., Intel PT), and augmenting with static must-alias information, achieving $\sim 94\%$ recovery of dynamic-slice statements at only $5\%$ performance overhead (Stoica et al., 2021).
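The sampling idea can be caricatured in a few lines. This is purely illustrative: the real system samples at the hardware level and combines observations with static must-alias information, none of which is modeled here. Each monitored run records only a random subset of write events, so a single run observes a fraction of the def-use dependencies, while the union over many runs approaches the full dynamic dependence set.

```python
import random

def sampled_dependences(trace, rate, rng):
    """Record a def-use dependence only if the defining write was sampled.
    `trace` is a list of (line, defs, uses) tuples in execution order."""
    last_def, deps = {}, set()
    for line, defs, uses in trace:
        for v in uses:
            if v in last_def:
                deps.add((last_def[v], line))
        for v in defs:
            if rng.random() < rate:
                last_def[v] = line
            else:
                last_def.pop(v, None)   # this write goes unobserved
    return deps

# Trace of: a=...; b=f(a); c=g(b)  -- two true dependences.
trace = [("L1", {"a"}, set()), ("L2", {"b"}, {"a"}), ("L3", {"c"}, {"b"})]
rng = random.Random(0)
recovered = set()
for _ in range(200):                    # union observations over repeated runs
    recovered |= sampled_dependences(trace, rate=0.3, rng=rng)
print(recovered == {("L1", "L2"), ("L2", "L3")})
```

Each run pays only the (cheap) sampled-write cost, and coverage of the dependence set grows with the number of monitored runs.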

On-demand re-execution paradigms trade full-trace storage for repeated, targeted runs that confirm only the necessary frontier dependencies, shifting the computational complexity from execution size to slice size. Empirical results demonstrate speedups of $5\times$–$125\times$, with near-linear scaling in slice size when the slice is much smaller than the trace, i.e., $S \ll N$ for slice size $S$ and trace length $N$ (Postolski et al., 2022).
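A toy rendering of the re-execution idea follows. The `run(watch)` callback protocol is an assumption made for illustration; real systems re-execute under a debugger or checkpointing runtime and also track statement instances and control dependencies, which this sketch omits. The point is structural: only the dependence frontier is stored, and the number of re-executions grows with the slice, not with the trace.

```python
def slice_by_reexecution(run, criterion_var):
    """Resolve one frontier variable per targeted re-execution,
    storing only the frontier, never the full trace."""
    sliced, frontier, resolved = set(), {criterion_var}, set()
    while frontier:
        target = frontier.pop()
        resolved.add(target)
        hit = {}
        def watch(line, defs, uses, target=target, hit=hit):
            if target in defs:           # remember the *last* write to target
                hit["line"], hit["uses"] = line, uses
        run(watch)                       # one re-execution per frontier variable
        if hit:
            sliced.add(hit["line"])
            frontier |= set(hit["uses"]) - resolved
    return sliced

def run(watch):                          # replays: a=1; b=2; c=a+1; d=c
    watch("L1", {"a"}, set())
    watch("L2", {"b"}, set())
    watch("L3", {"c"}, {"a"})
    watch("L4", {"d"}, {"c"})

print(sorted(slice_by_reexecution(run, "d")))   # ['L1', 'L3', 'L4']
```

Here three re-executions suffice, one per variable in the slice's dependence chain, while `L2` never enters the slice.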

4. Extensions to Concurrency, Constraints, and Higher-Order Programs

Concurrent dynamic slicing demands causality-preserving mechanisms, often established via lattice-theoretic frameworks that guarantee slices are invariant under causal interleaving, i.e., all causally equivalent executions yield isomorphic slice lattices. This has been formalized in process calculi (e.g., the $\pi$-calculus) where forward and backward slice operators form Galois connections, and braid isomorphisms allow slices to be transferred across different execution interleavings (Perera et al., 2016).

In constraint programming (CCP, tcc), dynamic slicing tracks contributions of agents (tell, ask, local, call) to store constraints, supporting per-agent marking and per-constraint backward propagation, and tailoring to timed or stochastic variants (Falaschi et al., 2016).

For functional, imperative, and gradual typing environments, Galois-style formalizations provide minimal slices sufficient to reconstruct a designated partial output, with mechanized proofs of correctness, minimality, and duality between forward and backward slicing (Ricciotti et al., 2017, Stolarek et al., 2019, Schwerter et al., 27 Feb 2025).
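The Galois-connection reading can be stated compactly. The following is one standard formulation, paraphrased from this line of work rather than quoted: for a fixed execution, forward slicing $\mathrm{fwd}$ maps partial programs to partial outputs, backward slicing $\mathrm{bwd}$ maps partial outputs to partial programs, and the pair is adjoint.

```latex
% Lattices: partial programs (P, \sqsubseteq) and partial outputs (O, \sqsubseteq),
% each ordered by how much of the program/output is retained. For a fixed run:
\mathrm{bwd}(o) \sqsubseteq p \;\Longleftrightarrow\; o \sqsubseteq \mathrm{fwd}(p)
% Immediate consequences: sufficiency and minimality of backward slices
o \sqsubseteq \mathrm{fwd}(\mathrm{bwd}(o)), \qquad \mathrm{bwd}(\mathrm{fwd}(p)) \sqsubseteq p
```

The first consequence says a backward slice suffices to reconstruct the designated partial output; the second says it retains nothing beyond what that output requires.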

5. Complexity, Uniqueness, and Theoretical Limits

The computational complexity of dynamic slicing has critical implications for tool and algorithm design. Verifying whether a candidate slice is a valid dynamic slice can be done in polynomial time in the path-faithful variant, whereas permitting path reductions (i.e., allowing alternate branches or loop iterations to be omitted) makes verification co-NP-hard and the existence problem NP-hard (Danicic et al., 2017). Minimal dynamic slices are generally non-unique; the set-inclusion-minimal slice can depend on algorithmic choices and the order of statement deletion, and different heuristics may yield different slices for the same execution and criterion.
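Non-uniqueness is easy to exhibit under deletion-based slice semantics with path reduction. The following is a deliberately trivial example of our own construction, not drawn from the cited paper: a variable is redundantly redefined, so either definition alone suffices to preserve the criterion.

```python
# Original program: S1: x = 1 - 1 ; S2: x = 0 ; S3: y = x
# Criterion: the value of y after S3 (which is 0 in the original run).

def preserves_criterion(candidate_src):
    """Does the residual program still end with y == 0?"""
    env = {}
    try:
        exec(candidate_src, env)         # illustrative use only
    except NameError:                    # x undefined: slice deleted too much
        return False
    return env.get("y") == 0

slice_a = "x = 1 - 1\ny = x"    # keep S1 + S3, drop S2
slice_b = "x = 0\ny = x"        # keep S2 + S3, drop S1
too_small = "y = x"             # S3 alone fails

print(preserves_criterion(slice_a), preserves_criterion(slice_b),
      preserves_criterion(too_small))   # True True False
```

Both `slice_a` and `slice_b` preserve the criterion, and neither is contained in the other, so the set-inclusion-minimal dynamic slice is not unique; which one a slicer reports depends on its deletion order.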

A plausible implication is that practical slicers must balance precision against efficiency—since guaranteeing the smallest slice is intractable in the general case. Sufficiently precise—though not minimal—slices can be obtained via dependence-graph-based heuristics, approximate dynamic slicing (intersecting static slices with coverage), and hybrid dynamic-static models (Soremekun et al., 2021, Sasirekha et al., 2011).

6. Empirical Evaluation, Impact, and Applications

Empirical studies consistently show that dynamic slices are substantially smaller than static slices: e.g., $21\%$ of code retained versus $33\%$ for static slicing in C fault-localization experiments, with improved precision and ranking when localizing single faults (Soremekun et al., 2021). Statistical slicing recovers $\sim 94\%$ of pure dynamic-slice statements at $5\%$ overhead in production C/C++ code (Stoica et al., 2021). Focused slicing accelerates slice computation without significant loss of precision even for industrial-scale C# projects (Soifer et al., 2022). On-demand re-execution provides $8\times$–$125\times$ speedups on algorithmic and parser benchmarks (Postolski et al., 2022).

Applications include debugging, fault localization, program comprehension, regression testing, gradual type error diagnosis, and root-cause analysis for runtime failures (Sasirekha et al., 2011, Schwerter et al., 27 Feb 2025). Slicing is routinely integrated into program analysis pipelines, code review tools, CI regression suites, and IDE support for error diagnosis.

7. Limitations, Machine Learning Integration, and Future Directions

Dynamic slicing is limited to specific runs; each slice is valid only for its input and control path. Instrumentation and trace storage can become prohibitive for long-running or input-intensive programs, though recent symbolic and statistical approaches mitigate these costs. Slices may omit rare but relevant dependencies not exercised in the observation window, and comprehension is bounded by the conciseness and completeness of trace indexing.

LLMs remain at primitive performance for dynamic slicing: the best reported accuracy is only $\sim 60\%$ slice-statement recall, with no exact-match slices produced and failures dominated by complex control flow and variable/assignment tracking errors (Shahandashti et al., 2024). Prompt engineering and iterative feedback loops offer incremental ($\sim 4\%$) gains but have yet to produce slicing reliable enough for production debugging. Current best practice is to combine tool-generated context with LLM-based refinement.

Active research aims to scale dynamic slicing to distributed and cloud environments, integrate confidence-weighted and probabilistic slice inference, support multithreading and cross-language ecosystems, and pursue mechanized verification of slicing algorithms for correctness and minimality (Soifer et al., 2022, Stolarek et al., 2019, Ricciotti et al., 2017, Stoica et al., 2021).


In summary, dynamic program slicing presents a rigorously defined, operationally precise approach to tracing value- and control-dependent influences in concrete executions, with robust methodologies spanning imperative, object-oriented, concurrent, aspect-oriented, and functional paradigms. The literature documents substantial advances in scalable algorithms, formal semantics, and practical integration, but ongoing complexity and precision limits—and the constrained performance of LLMs for slicing tasks—remain open challenges for future work.
