Dynamic Dependence Graph (DDG)
- DDG is a formalism that represents evolving, context-sensitive dependencies via dynamic adjacency matrices and data-driven mechanisms.
- In time-series modeling, it updates inter-variable relationships in GNNs to enhance forecasting accuracy with adaptive graph learning.
- For LLM auditing, DDG employs attention-weighted, directed graphs to track decision provenance and enable real-time security anomaly detection.
A Dynamic Dependence Graph (DDG) is a formalism for representing dependencies that evolve across contexts or time, with instantiations in both neural time-series modeling and LLM-driven agent auditing. A DDG can denote either adaptive inter-variable relationships within multivariate data (via dynamically updated adjacency matrices for graph neural networks), or the real-time provenance of LLM agent planning decisions (as an attention-weighted, directed graph over contextual concepts). In both domains, DDGs encode non-static, context-sensitive dependency structure via learnable or data-driven mechanisms, supporting tasks such as interpretable forecasting, adaptive modeling, and security auditing.
1. Formal Structure and Mathematical Representation
A Dynamic Dependence Graph (DDG) is defined as a time-varying or context-dependent directed graph $G_t = (V, E_t, w_t)$, where $V$ is a set of vertices, $E_t \subseteq V \times V$ a set of directed edges (possibly time-indexed), and $w_t : E_t \to \mathbb{R}_{\ge 0}$ assigns a nonnegative scalar weight to each edge, quantifying the (possibly probabilistic) influence from source to target.
Two canonical instantiations are:
- In Multivariate Time Series Modeling via GNNs: $G_t$ is parameterized by a dynamic adjacency matrix $A_t \in \mathbb{R}^{N \times N}$, updated at each time step to reflect latent inter-series dependencies, where $N$ is the number of variables or nodes (Sriramulu et al., 2023).
- In LLM Agent Decision Tracking: $V$ comprises logical concepts (e.g., user query, tool descriptions, prior results, call decisions), with edge weights determined by aggregated LLM attention, tracing the provenance of final agent actions (Wang et al., 28 Aug 2025).
The structural formation, update rules, and edge semantics are domain-specific, but the unifying property is context- or data-dependent, non-static edge relationships.
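The abstract definition above can be sketched as a small data structure. This is an illustrative minimal container, not an implementation from either cited paper; the class and method names are invented for the sketch.

```python
from dataclasses import dataclass, field


@dataclass
class DynamicDependenceGraph:
    """Minimal DDG sketch: directed edges with nonnegative weights,
    indexed by a time step or context identifier t."""
    vertices: set = field(default_factory=set)
    # edges[t][(u, v)] = nonnegative weight of the directed edge u -> v at step t
    edges: dict = field(default_factory=dict)

    def set_edge(self, t, u, v, weight):
        if weight < 0:
            raise ValueError("edge weights must be nonnegative")
        self.vertices.update((u, v))
        self.edges.setdefault(t, {})[(u, v)] = weight

    def in_weights(self, t, v):
        """Total weighted influence flowing into vertex v at step t."""
        return sum(w for (_, dst), w in self.edges.get(t, {}).items() if dst == v)


# Toy usage with hypothetical vertex names:
g = DynamicDependenceGraph()
g.set_edge(0, "query", "decision", 0.7)
g.set_edge(0, "tool_A", "decision", 0.2)
```

Both instantiations below specialize this skeleton: the GNN case stores the weights as a dense adjacency matrix, while the LLM-auditing case derives them from aggregated attention.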
2. Construction Methodologies
a. Time-Series Adaptive Graph Learning
The DDG methodology for GNN-based forecasting unfolds as follows (Sriramulu et al., 2023):
- Static Skeleton: Construct a sparse initial adjacency using the elementwise maximum over several statistical estimators (Pearson correlation, Granger causality, graphical Lasso, etc.), followed by thresholding via a binary mask $M$:
$A_s = M \odot \max\big(A^{(1)}, \dots, A^{(k)}\big)$
- Dynamic Correction: At each timestep $t$, compute a dynamic correction $\Delta A_t$ by a causal convolutional self-attention mechanism over the recent data window. Mask this by $M$ for tractability.
- Dynamic Adjacency: The current DDG is then
$A_t = A_s + M \odot \Delta A_t$
- Propagation: $A_t$ serves as the adjacency for graph convolutions applied to the temporally-windowed data, with parameters updated end-to-end via backpropagation against a mean-squared error loss.
This pipeline yields a dependence graph that adapts as the sequential data evolves, refining edge strengths to reflect shifting interdependencies.
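The steps above can be sketched numerically. This is a schematic, not the ADLGNN implementation: the causal convolutional self-attention is replaced by a simplified query/key scoring over the window, and all sizes and matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, d = 4, 16, 8  # nodes, window length, feature dim (illustrative sizes)

# Static skeleton and binary mask (assumed given; random here for the sketch).
A_raw = rng.random((N, N))
M = (A_raw > 0.5).astype(float)          # sparsifying threshold -> binary mask
A_s = M * A_raw                          # sparse static adjacency

# Simplified stand-in for causal convolutional self-attention over the
# recent window X: score node pairs, softmax per row, then mask by M.
X = rng.standard_normal((N, T, d))
Q = X[:, -1, :]                          # last-step summary as queries
K = X.mean(axis=1)                       # window mean as keys
scores = Q @ K.T / np.sqrt(d)
softmax = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
delta = M * softmax                      # masked dynamic correction

# Dynamic DDG adjacency used for this step's graph convolution.
A_t = A_s + delta
```

In the real pipeline the correction is produced by a trained attention module and the whole construction is differentiable, so gradients from the forecasting loss reshape the edge weights.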
b. LLM Provenance and Decision Analysis
In decision provenance for LLM agents (Wang et al., 28 Aug 2025):
- Vertices: $V = \{v_{\text{query}}, v_{\text{tool}_1}, \dots, v_{\text{tool}_m}, v_{\text{results}}, v_{\text{decision}}\}$, corresponding to user query, tool descriptions, prior results, and call-decision components.
- Edges and Weights: For each source-target pair $(u, v)$, compute weights by aggregating squared, filtered attention scores (Total Attention Energy):
$w(u \to v) = \sum_{l} \sum_{j \in v} \sum_{i \in u} \big(\tilde{A}^{(l)}_{ji}\big)^2$
where $\tilde{A}^{(l)}$ is the filtered per-layer LLM attention (post attention-sink filtering and entropy normalization).
- Anomaly/Audit Algorithms: Subgraphs and edge weight anomalies serve as signals for influence auditing, especially in security applications.
This instance produces a context-specific, interpretable provenance graph exposing distributed, probabilistic dependencies flowing into action selection.
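A minimal sketch of the edge-weight computation, under simplifying assumptions: attention-sink filtering and entropy normalization are omitted, the attention tensor is random, and the concept-to-token span mapping is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n = 2, 10                              # layers, context tokens (toy sizes)
attn = rng.random((L, n, n))
attn /= attn.sum(axis=-1, keepdims=True)  # row-normalized attention per layer

# Hypothetical token spans occupied by each concept vertex in the context.
spans = {"query": range(0, 4), "tool_desc": range(4, 8), "decision": range(8, 10)}


def edge_weight(src, dst):
    """Total Attention Energy (simplified): sum of squared attention from
    source-span tokens to destination-span tokens, over all layers."""
    s, d = spans[src], spans[dst]
    return float((attn[:, d, :][:, :, s] ** 2).sum())


w_qd = edge_weight("query", "decision")
w_td = edge_weight("tool_desc", "decision")
```

Comparing `w_qd` against `w_td` is the kind of per-edge signal the auditing algorithms then inspect for anomalies.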
3. Statistical and Neural Mechanisms for Dependency Estimation
The construction of DDG for time-series graphs hinges on ensemble statistical structure learning. The static component is built by aggregating adjacency matrices derived from the following:
| Method | Dependency Type | Matrix Output |
|---|---|---|
| Pearson Correlation | Linear association | Symmetric, undirected |
| Granger Causality | Predictive causality | Directed |
| Graphical Lasso | Conditional Gaussian dependencies | Symmetric (sparse precision) |
| Mutual Information | Nonlinear dependence | Symmetric, undirected |
| Transfer Entropy | Directed temporal information transfer | Directed |
The neural dynamic component is computed as masked convolutional self-attention, with masking enforced by the binary mask $M$. The final $A_t$ encodes both domain knowledge and data-adaptive corrections in a fully differentiable pipeline.
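The ensemble aggregation of the static component can be sketched as follows. Only the Pearson estimator is shown; the other estimators would each contribute their own adjacency matrix to the elementwise maximum, and the data, injected dependency, and threshold value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))    # 200 timesteps, 5 series (toy data)
X[:, 1] += 0.8 * X[:, 0]             # inject a dependency between series 0 and 1

# One estimator shown: absolute Pearson correlation. In the full pipeline,
# Granger causality, graphical Lasso, mutual information, and transfer
# entropy would each yield an additional adjacency estimate.
A_corr = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(A_corr, 0.0)

estimates = [A_corr]                       # placeholder for the full ensemble
A_max = np.maximum.reduce(estimates)       # elementwise maximum across estimators

tau = 0.3                                  # sparsification threshold (illustrative)
M = (A_max > tau).astype(float)            # binary mask
A_s = M * A_max                            # sparse static skeleton
```

The elementwise maximum is deliberately permissive: an edge survives if any estimator considers it strong, and the learned dynamic correction is then free to down-weight spurious entries.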
4. Applications in Time-Series Forecasting and LLM Tool Security
a. Multivariate Forecasting
ADLGNN (Adaptive Dependency Learning Graph Neural Network) demonstrates that DDG-based architectures can realize superior multivariate forecasting in domains without natural graph priors (traffic, electricity, solar). The DDG enables context-sensitive diffusion of node representations during temporal and spatial propagation, supporting causal forecasting and model interpretability. Empirically, ADLGNN achieves the lowest root relative squared error (RSE) across multiple datasets, outperforming strong baselines (e.g., MTGNN) and static-graph ablations (Sriramulu et al., 2023).
b. Real-Time LLM Decision Guardrails
In MCP-compliant agent infrastructure, DDG enables provenance tracking to detect and attribute tool poisoning attacks (TPA). MindGuard uses DDG to quantify abnormal influence transfer from uninvoked, potentially-poisoned tools to critical call-decision nodes, operationalized via the Anomaly Influence Ratio (AIR):
$\mathrm{AIR} = \frac{\sum_{u \in \mathcal{U}} w(u \to v_{\text{decision}})}{\sum_{u \in V} w(u \to v_{\text{decision}})}$
where $\mathcal{U} \subseteq V$ is the set of uninvoked tool vertices and $w$ the DDG edge weight into the call-decision node; an alarm is raised when AIR exceeds a threshold $\tau$.
The graph structure allows precise, real-time detection (average precision 94–99%) and attribution (accuracy 95–100%), operating with subsecond latency and no additional token overhead (Wang et al., 28 Aug 2025).
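A minimal sketch of the AIR check, assuming edge weights into the decision node are already available; the ratio's exact form is an assumption consistent with the description above, not MindGuard's verbatim definition, and all weights and the threshold are illustrative.

```python
def anomaly_influence_ratio(in_edges, uninvoked):
    """Share of total influence on the call-decision node originating from
    uninvoked tool vertices. `in_edges` maps source vertex -> DDG edge
    weight into the decision node (sketch, not MindGuard's exact formula)."""
    total = sum(in_edges.values())
    if total == 0:
        return 0.0
    return sum(w for v, w in in_edges.items() if v in uninvoked) / total


# Hypothetical provenance weights: "ReadFile" was never invoked by the user.
in_edges = {"query": 0.2, "CreateDir": 0.3, "ReadFile": 0.5}
air = anomaly_influence_ratio(in_edges, uninvoked={"ReadFile"})
flagged = air > 0.4  # threshold tau, illustrative
```

A high AIR means the decision was dominated by context the agent had no legitimate reason to consult, which is exactly the signature of a poisoned tool description.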
5. Relationships to Classical Dependence Models
DDG in LLM auditing represents a probabilistic, attention-weighted analogue of the Program Dependence Graph (PDG) used in static program analysis. While PDG vertices denote program statements and edges encode control/data dependencies, DDG vertices correspond to semantic concepts, with edges expressing soft, distributed attention-based influence. This yields a decomposition into control-flow (context tool choice) and data-flow (context arguments) subgraphs, permitting secure information flow policies (CFI, DFI) at the decision level (Wang et al., 28 Aug 2025).
In adaptive GNNs, DDG serves as a hybrid of data-driven and learned connectivity, supplementing static graphical models with dynamically adjusted edge strengths as inferred via neural attention.
6. Computational Aspects and Empirical Performance
a. DDG for Time-Series Graph Modeling
- Dynamic block complexity: $O(N \cdot k)$ per time step (where $N$ is node count, $k$ is neighbor sparsity).
- Full model: Five spatio-temporal blocks, each with parallel MixHop graph convolution and dilated temporal convolutions.
- Training: End-to-end with MSE-based objective and regularization. No explicit loss term for the adjacency matrix; the graph learning is implicit and differentiable.
- Datasets: Strong performance across solar, electricity, and traffic datasets, with average RSE improvement of 3.4% over previous best GNNs, and training times comparable or superior at scale (Sriramulu et al., 2023).
b. DDG for LLM Security Auditing
- Construction: Attention filtering, partition, and weighted total-energy aggregation in $O(n^2)$ per layer ($n$ the context length); typically finishes under 1 second.
- Overhead: ≤5% latency; zero token cost.
- Robustness: Threshold parameters ($\tau$ in AIR) enable flexible TPR/FPR trade-offs; achieves high TPR (≥90%) at low FPR (≤1%) even on challenging benchmarks.
- Practicality: No need for model modification or secondary inference calls (Wang et al., 28 Aug 2025).
7. Illustrative Example: LLM Tool Call Provenance
Given an LLM context in which the user requests directory creation, but a poisoned tool (“ReadFile”) is injected, DDG vertices correspond to the query, both tool descriptions, and the ultimate call-decision nodes. In the case of an explicit-invocation hijack, edge weights reveal a dominant attention path from the poisoned tool vertex to the call-decision node, with a computed AIR exceeding the anomaly threshold, allowing real-time detection and attribution to the malicious tool. This demonstrates the interpretability and operational effectiveness of DDG-based security auditing (Wang et al., 28 Aug 2025).