Dynamic Dependence Graph (DDG)

Updated 24 February 2026
  • DDG is a formalism that represents evolving, context-sensitive dependencies via dynamic adjacency matrices and data-driven mechanisms.
  • In time-series modeling, it updates inter-variable relationships in GNNs to enhance forecasting accuracy with adaptive graph learning.
  • For LLM auditing, DDG employs attention-weighted, directed graphs to track decision provenance and enable real-time security anomaly detection.

A Dynamic Dependence Graph (DDG) is a formalism for representing dependencies that evolve across contexts or time, with instantiations in both neural time-series modeling and LLM-driven agent auditing. A DDG can denote either adaptive inter-variable relationships within multivariate data (via dynamically updated adjacency matrices for graph neural networks), or the real-time provenance of LLM agent planning decisions (as an attention-weighted, directed graph over contextual concepts). In both domains, DDGs encode non-static, context-sensitive dependency structure via learnable or data-driven mechanisms, supporting tasks such as interpretable forecasting, adaptive modeling, and security auditing.

1. Formal Structure and Mathematical Representation

A Dynamic Dependence Graph (DDG) is defined as a time-varying or context-dependent directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, w)$, where $\mathcal{V}$ is a set of vertices, $\mathcal{E}$ a set of directed edges (possibly time-indexed), and $w$ assigns a nonnegative scalar weight to each edge, quantifying the (possibly probabilistic) influence from source to target.

Two canonical instantiations are:

  • In multivariate time-series modeling via GNNs: $\mathcal{G}^{(t)}$ is parameterized by a dynamic adjacency matrix $A^{(t)} \in \mathbb{R}^{N \times N}$, updated at each time step to reflect latent inter-series dependencies, where $N$ is the number of variables or nodes (Sriramulu et al., 2023).
  • In LLM agent decision tracking: $\mathcal{V}$ comprises logical concepts (e.g., user query, tool descriptions, prior results, call decisions), with edge weights $w(v_s, v_t)$ determined by aggregated LLM attention, tracing the provenance of final agent actions (Wang et al., 28 Aug 2025).

The structural formation, update rules, and edge semantics are domain-specific, but the unifying property is context- or data-dependent, non-static edge relationships.
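To make the shared abstraction concrete, the following minimal sketch represents a DDG as a sequence of weighted, directed edge snapshots, one per time step or context. All class and method names here are hypothetical, chosen for illustration only; neither cited paper prescribes this data structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Edge = Tuple[str, str]  # (source vertex, target vertex)

@dataclass
class DynamicDependenceGraph:
    vertices: List[str]
    # snapshots[t] maps each directed edge to its nonnegative weight w(v_s, v_t)
    snapshots: List[Dict[Edge, float]] = field(default_factory=list)

    def add_snapshot(self, weights: Dict[Edge, float]) -> None:
        """Append the edge weights observed at the next time step / context."""
        assert all(w >= 0 for w in weights.values()), "weights must be nonnegative"
        self.snapshots.append(weights)

    def weight(self, t: int, src: str, dst: str) -> float:
        """Influence of src on dst at time t (0 if the edge is absent)."""
        return self.snapshots[t].get((src, dst), 0.0)

# Usage: a two-node graph whose dependency strength changes over time.
g = DynamicDependenceGraph(vertices=["x1", "x2"])
g.add_snapshot({("x1", "x2"): 0.9})
g.add_snapshot({("x1", "x2"): 0.1, ("x2", "x1"): 0.4})
```

The defining property, edge weights indexed by time or context rather than fixed once, is the only element both instantiations share; everything else (how the weights are produced) is domain-specific.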

2. Construction Methodologies

a. Time-Series Adaptive Graph Learning

The DDG methodology for GNN-based forecasting unfolds as follows (Sriramulu et al., 2023):

  1. Static skeleton: Construct a sparse initial adjacency $A \in \mathbb{R}^{N \times N}$ by taking the elementwise maximum over several statistical estimators (Pearson correlation, Granger causality, graphical Lasso, etc.), then threshold it to obtain a binary mask $A_{Bi}$:

$$A(i,j) = \max_{x} A_x(i,j).$$

  2. Dynamic correction: At each time step $t$, compute a dynamic correction $\Delta A^{(t)}$ via a causal convolutional self-attention mechanism over the recent data window, masked by $A_{Bi}$ for tractability.
  3. Dynamic adjacency: The current DDG is then

$$A^{(t)} = A + \Delta A^{(t)}.$$

  4. Propagation: $A^{(t)}$ serves as the adjacency for graph convolutions applied to the temporally windowed data, with parameters updated end-to-end via backpropagation against a mean-squared-error loss.

This pipeline yields a dependence graph that adapts as the sequential data evolves, refining edge strengths to reflect shifting interdependencies.
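The correction-and-combination steps above can be sketched as follows. This is an illustrative stand-in only: the paper's dynamic block is a causal convolutional self-attention network, which is replaced here by a simple masked lag-1 cross-correlation, and the static skeleton $A$ and mask $A_{Bi}$ are random placeholders rather than estimator outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
# Placeholders for the static skeleton A and binary mask A_Bi, which in the
# real pipeline come from the ensemble of statistical estimators.
A = rng.random((N, N))
A_bi = (A > 0.5).astype(float)

def dynamic_correction(window: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for the causal convolutional self-attention: a lag-1
    cross-correlation over the recent window, masked by A_Bi."""
    x, x_next = window[:-1], window[1:]
    xc = (x - x.mean(0)) / (x.std(0) + 1e-8)
    nc = (x_next - x_next.mean(0)) / (x_next.std(0) + 1e-8)
    return (xc.T @ nc) / len(xc) * mask  # (N x N), zero wherever mask is zero

window = rng.normal(size=(20, N))            # most recent 20 observations
delta_A = dynamic_correction(window, A_bi)   # Delta A^(t)
A_t = A + delta_A                            # dynamic adjacency A^(t)
```

The mask keeps the correction sparse: only edges admitted by the static skeleton can be re-weighted, which is what makes the per-step update tractable.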

b. LLM Provenance and Decision Analysis

In decision provenance for LLM agents (Wang et al., 28 Aug 2025):

  1. Vertices: $\mathcal{V} = V_u \cup V_T \cup V_R \cup V_c$, corresponding to the user query, tool descriptions, prior results, and call-decision components.
  2. Edges and weights: For each source-target pair $(v_s \to v_t)$, compute weights by aggregating squared, filtered attention scores (Total Attention Energy):

$$w(v_s, v_t) = \sum_{i \in \mathrm{tokens}(v_s)} \sum_{j \in \mathrm{tokens}(v_t)} A_{j,i}^2,$$

where $A_{j,i}$ is the filtered per-layer LLM attention (after attention-sink filtering and entropy normalization).

  3. Anomaly/audit algorithms: Subgraphs and edge-weight anomalies serve as signals for influence auditing, especially in security applications.

This instance produces a context-specific, interpretable provenance graph exposing distributed, probabilistic dependencies flowing into action selection.
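The Total Attention Energy aggregation can be sketched as below. The attention matrix is a random placeholder (in practice it would be the sink-filtered, entropy-normalized per-layer attention of the LLM), and the vertex-to-token partition is an assumed, illustrative one.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens = 12
# Hypothetical filtered attention matrix for one layer: attn[j, i] is the
# attention from target token j to source token i.
attn = rng.random((n_tokens, n_tokens))

# Illustrative partition of the context into logical vertices.
spans = {
    "user_query": range(0, 4),
    "tool_desc": range(4, 9),
    "call_decision": range(9, 12),
}

def edge_weight(attn: np.ndarray, src_tokens, dst_tokens) -> float:
    """Total Attention Energy: sum of squared attention from src to dst."""
    block = attn[np.ix_(list(dst_tokens), list(src_tokens))]  # rows j, cols i
    return float((block ** 2).sum())

# Influence of the query vs. a tool description on the call decision.
w_query = edge_weight(attn, spans["user_query"], spans["call_decision"])
w_tool = edge_weight(attn, spans["tool_desc"], spans["call_decision"])
```

Squaring before summing emphasizes concentrated attention paths over diffuse background attention, which is what makes dominant influence routes stand out in the resulting graph.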

3. Statistical and Neural Mechanisms for Dependency Estimation

The construction of a DDG for time-series graphs hinges on ensemble statistical structure learning. The static component $A$ is built by aggregating $N \times N$ adjacency matrices derived from the following estimators:

  • Pearson correlation (linear association): $A_{CM}(i,j) = |\rho(Y_{:,i}, Y_{:,j})|$
  • Granger causality (predictive causality): $A_{GC}(i,j) = \ln \frac{e_i}{e_{ij}}$
  • Graphical Lasso (conditional Gaussian dependence): $A_{GL}(i,j) = |\Theta_{ij}|$
  • Mutual information (nonlinear dependence): $A_{MI}(i,j) = I(Y_{:,i}; Y_{:,j})$
  • Transfer entropy (directed temporal information transfer): $A_{TE}(i,j) = TE(i \to j)$

The neural dynamic component $\Delta A^{(t)}$ is computed as masked convolutional self-attention, with masking enforced by $A_{Bi}$. The final $A^{(t)}$ encodes both domain knowledge and data-adaptive corrections in a fully differentiable pipeline.
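The ensemble aggregation of the static component can be sketched as follows, using only two cheap candidate estimators: absolute Pearson correlation, plus a lagged cross-correlation as a crude stand-in for the directed estimators (Granger causality, transfer entropy). The toy data and the choice of stand-ins are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 3
Y = rng.normal(size=(T, N)).cumsum(axis=0)  # toy multivariate series (T x N)

# Candidate 1: absolute Pearson correlation, |rho(Y_:,i, Y_:,j)|.
A_cm = np.abs(np.corrcoef(Y.T))

# Candidate 2: lagged cross-correlation between Y[:, i] at t and Y[:, j]
# at t+1, a crude undirected proxy for the directed estimators.
x, x_next = Y[:-1], Y[1:]
xc = (x - x.mean(0)) / x.std(0)
nc = (x_next - x_next.mean(0)) / x_next.std(0)
A_lag = np.abs(xc.T @ nc) / len(xc)

# Static skeleton: elementwise maximum over candidates, zero diagonal.
A = np.maximum(A_cm, A_lag)
np.fill_diagonal(A, 0.0)
```

Taking the elementwise maximum keeps an edge if any estimator finds a dependency, so the skeleton is conservative about pruning; the later thresholding into $A_{Bi}$ restores sparsity.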

4. Applications in Time-Series Forecasting and LLM Tool Security

a. Multivariate Forecasting

ADLGNN (Adaptive Dependency Learning Graph Neural Network) demonstrates that DDG-based architectures can realize superior multivariate forecasting in domains without natural graph priors (traffic, electricity, solar). The DDG enables context-sensitive diffusion of node representations during temporal and spatial propagation, supporting causal forecasting and model interpretability. Empirically, ADLGNN achieves the lowest root relative squared error (RSE) across multiple datasets, outperforming strong baselines (e.g., MTGNN) and static-graph ablations (Sriramulu et al., 2023).

b. Real-Time LLM Decision Guardrails

In MCP-compliant agent infrastructure, DDG enables provenance tracking to detect and attribute tool poisoning attacks (TPA). MindGuard uses DDG to quantify abnormal influence transfer from uninvoked, potentially-poisoned tools to critical call-decision nodes, operationalized via the Anomaly Influence Ratio (AIR):

$$\alpha_{s,t} = \frac{w(v_s, v_t)}{w(v_u, v_t) + w(v_s^c, v_t)}$$

The graph structure allows precise, real-time detection (average precision 94–99%), attribution (accuracy 95–100%), and operates with subsecond latency and no additional token overhead (Wang et al., 28 Aug 2025).
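The AIR check reduces to a ratio test once the DDG edge weights are available. The sketch below assumes the weights have already been computed from attention energy; the numeric values and the threshold $\tau$ are illustrative, not taken from the paper.

```python
def air(w_suspect: float, w_user: float, w_other: float) -> float:
    """Anomaly Influence Ratio:
    alpha_{s,t} = w(v_s, v_t) / (w(v_u, v_t) + w(v_s^c, v_t)),
    i.e. the suspect source's influence on the call decision relative to
    the user query and the complementary context."""
    return w_suspect / (w_user + w_other)

tau = 0.5  # illustrative anomaly threshold

# Benign case: the user query dominates influence on the call decision.
assert air(w_suspect=0.1, w_user=0.8, w_other=0.4) < tau

# Hijack case: an uninvoked tool's description dominates, so it is flagged.
alpha = air(w_suspect=1.2, w_user=0.3, w_other=0.2)
flagged = alpha > tau
```

Because the check is a single ratio over already-computed edge weights, it adds essentially no latency on top of DDG construction, consistent with the subsecond figures reported above.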

5. Relationships to Classical Dependence Models

DDG in LLM auditing represents a probabilistic, attention-weighted analogue of the Program Dependence Graph (PDG) used in static program analysis. While PDG vertices denote program statements and edges encode control/data dependencies, DDG vertices correspond to semantic concepts, with edges expressing soft, distributed attention-based influence. This yields a decomposition into control-flow (context \to tool choice) and data-flow (context \to arguments) subgraphs, permitting secure information flow policies (CFI, DFI) at the decision level (Wang et al., 28 Aug 2025).

In adaptive GNNs, DDG serves as a hybrid of data-driven and learned connectivity, supplementing static graphical models with dynamically adjusted edge strengths as inferred via neural attention.

6. Computational Aspects and Empirical Performance

a. DDG for Time-Series Graph Modeling

  • Dynamic-block complexity: $O(NS)$ per time step, where $N$ is the node count and $S$ the neighbor sparsity.
  • Full model: five spatio-temporal blocks, each with parallel MixHop graph convolutions and dilated temporal convolutions.
  • Training: end-to-end with an MSE objective and $L_2$ regularization; graph learning is implicit and differentiable, with no explicit loss term for the adjacency matrix.
  • Datasets: strong performance across solar, electricity, and traffic datasets, with an average RSE improvement of 3.4% over previous best GNNs, and training times comparable or superior at scale (Sriramulu et al., 2023).

b. DDG for LLM Security Auditing

  • Construction: attention filtering, partitioning, and weighted total-energy aggregation run in $O(N^2)$ per layer and typically finish in under 1 second.
  • Overhead: ≤5% latency; zero token cost.
  • Robustness: threshold parameters ($\tau$ in AIR) enable flexible TPR/FPR trade-offs, achieving high TPR (≥90%) at low FPR (≤1%) even on challenging benchmarks.
  • Practicality: No need for model modification or secondary inference calls (Wang et al., 28 Aug 2025).

7. Illustrative Example: LLM Tool Call Provenance

Consider an LLM context in which the user requests directory creation, but a poisoned tool ("ReadFile") is injected. The DDG vertices correspond to the query, both tool descriptions, and the call-decision nodes. In the case of an explicit-invocation hijack, edge weights reveal a dominant attention path from the poisoned tool vertex to the call-decision node, with a computed AIR exceeding the anomaly threshold, allowing real-time detection and attribution to the malicious tool. This demonstrates the interpretability and operational effectiveness of DDG-based security auditing (Wang et al., 28 Aug 2025).
