Dynamic Multi-Parameter Joint Time-Vertex FRFT
- DMPJFRFT is a transform that assigns adaptive fractional orders per time and graph frequency to capture nonstationary dynamics in signals.
- It maintains key properties such as unitarity, invertibility, and index-additivity, facilitating gradient-based learning and neural network integration.
- Experimental validations show superior denoising and deblurring performance on dynamic graph signals and video data compared to classical transforms.
The Dynamic Multiple-Parameter Joint Time-Vertex Fractional Fourier Transform (DMPJFRFT) generalizes the spectral analysis of dynamic graph signals by introducing time- and frequency-adaptive fractional orders in both spatial (vertex) and temporal dimensions. Unlike classical joint time-vertex transforms that employ a single pair of fractional orders for an entire signal, the DMPJFRFT assigns a distinct fractional parameter to each graph frequency at each time instant, and to each temporal frequency. This yields significantly enhanced spectral modeling capability for data exhibiting temporally evolving topology or nonstationary dynamics, such as traffic sensor networks, biological networks, and natural video sequences. The transform preserves unitarity, invertibility, and index-additivity, and admits direct gradient-based learning and neural network integration for signal restoration and adaptive filtering (Cui et al., 20 Nov 2025).
1. Mathematical Formulation
Let $\mathcal{G}$ denote a graph with $N$ nodes, and let $X \in \mathbb{R}^{N \times T}$ be a dynamic graph signal with $T$ time steps. The DMPJFRFT is defined via two multi-parameter modules:
- Type-I Multiple-Parameter Graph FRFT: For each time $t$, a graph spectral fractional order vector $\boldsymbol{\alpha}_t = (\alpha_{t,1}, \dots, \alpha_{t,N})$ is assigned, producing a graph transform at time $t$ of the form
  $$F_G^{\boldsymbol{\alpha}_t} = V_G\, \Lambda_G^{\boldsymbol{\alpha}_t}\, V_G^{-1}, \qquad \Lambda_G^{\boldsymbol{\alpha}_t} = \operatorname{diag}\!\big(\lambda_1^{\alpha_{t,1}}, \dots, \lambda_N^{\alpha_{t,N}}\big),$$
  where $V_G$ diagonalizes the graph shift operator with eigenvalues $\lambda_1, \dots, \lambda_N$.
- Type-I Multiple-Parameter Discrete FRFT: A vector of temporal fractional orders $\boldsymbol{\beta} = (\beta_1, \dots, \beta_T)$ is defined, yielding
  $$F_T^{\boldsymbol{\beta}} = V_T\, \Lambda_T^{\boldsymbol{\beta}}\, V_T^{-1}, \qquad \Lambda_T^{\boldsymbol{\beta}} = \operatorname{diag}\!\big(\mu_1^{\beta_1}, \dots, \mu_T^{\beta_T}\big),$$
  where $V_T$ diagonalizes the temporal shift operator with eigenvalues $\mu_1, \dots, \mu_T$.
The DMPJFRFT operator acts column-wise as
$$\widehat{x}_t = F_G^{\boldsymbol{\alpha}_t}\, x_t, \qquad t = 1, \dots, T,$$
with $x_t$ the $t$-th column of $X$, followed by the temporal transform $Y = \widehat{X}\,\big(F_T^{\boldsymbol{\beta}}\big)^{\top}$. In vectorized form, this corresponds to
$$\operatorname{vec}(Y) = \mathbf{F}_T\, \mathbf{F}_G\, \operatorname{vec}(X),$$
where $\mathbf{F}_G = \operatorname{blkdiag}\!\big(F_G^{\boldsymbol{\alpha}_1}, \dots, F_G^{\boldsymbol{\alpha}_T}\big)$ and $\mathbf{F}_T = F_T^{\boldsymbol{\beta}} \otimes I_N$.
The inverse transform is given by
$$\operatorname{vec}(X) = \mathbf{F}_G^{-1}\, \mathbf{F}_T^{-1}\, \operatorname{vec}(Y),$$
assuming invertibility of each component.
Main algebraic properties established include unitarity (energy preservation), index-additivity, and an explicit eigenstructure: the transform is diagonalized in the Kronecker product basis of temporal and graph eigenvectors, with eigenvalues determined by the adaptive fractional powers of the graph and temporal eigenvalues (Cui et al., 20 Nov 2025).
2. Comparison with Prior Joint Fractional Transforms
The original Joint Time-Vertex Fractional Fourier Transform (JFRT) assigns a single pair of scalar fractional orders $(\alpha, \beta)$, yielding the operator
$$F_J^{(\alpha,\beta)} = F_T^{\beta} \otimes F_G^{\alpha},$$
which does not permit per-frequency or per-time adaptation (Alikaşifoğlu et al., 2022). While the JFRT maintains unitarity and index-additivity and supports Tikhonov-regularized denoising, it is limited in spectral selectivity. The DMPJFRFT generalizes the JFRT by introducing the order parameters $\{\boldsymbol{\alpha}_t\}_{t=1}^{T}$ and $\boldsymbol{\beta}$ described above, enabling dynamic, fine-grained control of spectral characteristics, which is leveraged for advanced adaptive filtering (Cui et al., 20 Nov 2025).
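This containment can be made explicit. As a brief check, using the blkdiag/Kronecker factorization reconstructed in Section 1 (not a formula quoted from the paper), setting every graph order vector to a constant $\boldsymbol{\alpha}_t = \alpha\mathbf{1}$ and the temporal orders to $\boldsymbol{\beta} = \beta\mathbf{1}$ collapses the DMPJFRFT to the scalar-order JFRT by the mixed-product property of the Kronecker product:
$$\mathbf{F}_T\,\mathbf{F}_G = \big(F_T^{\beta} \otimes I_N\big)\big(I_T \otimes F_G^{\alpha}\big) = F_T^{\beta} \otimes F_G^{\alpha} = F_J^{(\alpha,\beta)}.$$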
3. Discrete Algorithmic Implementation and Computational Complexity
Given the eigendecompositions $V_G \Lambda_G V_G^{-1}$ of the graph shift operator and $V_T \Lambda_T V_T^{-1}$ of the temporal shift operator, the DMPJFRFT is computed as follows (Cui et al., 20 Nov 2025); a minimal NumPy sketch follows the listing:
Algorithm 1: Compute DMPJFRFT-I-I
- For each time $t = 1, \dots, T$:
  - Form the diagonal matrix $\Lambda_G^{\boldsymbol{\alpha}_t} = \operatorname{diag}\!\big(\lambda_1^{\alpha_{t,1}}, \dots, \lambda_N^{\alpha_{t,N}}\big)$.
  - Assemble $F_G^{\boldsymbol{\alpha}_t} = V_G\, \Lambda_G^{\boldsymbol{\alpha}_t}\, V_G^{-1}$.
  - Compute the transformed column $\widehat{x}_t = F_G^{\boldsymbol{\alpha}_t}\, x_t$ and collect $\widehat{X} = [\widehat{x}_1, \dots, \widehat{x}_T]$.
- Form $\Lambda_T^{\boldsymbol{\beta}} = \operatorname{diag}\!\big(\mu_1^{\beta_1}, \dots, \mu_T^{\beta_T}\big)$ and $F_T^{\boldsymbol{\beta}} = V_T\, \Lambda_T^{\boldsymbol{\beta}}\, V_T^{-1}$.
- Apply $F_G^{\boldsymbol{\alpha}_t}$ to the columns of $X$ as above, then apply $F_T^{\boldsymbol{\beta}}$ along the temporal dimension of the result: $Y = \widehat{X}\,\big(F_T^{\boldsymbol{\beta}}\big)^{\top}$.
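The following is a minimal NumPy sketch of Algorithm 1 under the definitions above. The function names (`mp_fractional_power`, `dmpjfrft_forward`) and the toy unitary bases are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mp_fractional_power(base, orders):
    """Type-I multiple-parameter fractional power: eigendecompose `base`,
    raise the k-th eigenvalue to orders[k], and recompose (principal branch)."""
    lam, V = np.linalg.eig(base)
    return V @ np.diag(lam.astype(complex) ** orders) @ np.linalg.inv(V)

def dmpjfrft_forward(X, G_base, T_base, alphas, beta):
    """DMPJFRFT-I-I forward transform following Algorithm 1 (sketch).

    X      : (N, T) dynamic graph signal
    G_base : (N, N) base graph transform whose eigenvectors play the role of V_G
    T_base : (T, T) base temporal transform whose eigenvectors play the role of V_T
    alphas : (T, N) graph fractional orders, one vector per time step
    beta   : (T,)   temporal fractional orders, one per temporal frequency
    """
    N, T = X.shape
    X_hat = np.empty((N, T), dtype=complex)
    # Vertex stage: a distinct multi-parameter graph FRFT for each time step.
    for t in range(T):
        F_g_t = mp_fractional_power(G_base, alphas[t])
        X_hat[:, t] = F_g_t @ X[:, t]
    # Temporal stage: a single multi-parameter discrete FRFT along the rows.
    F_t = mp_fractional_power(T_base, beta)
    return X_hat @ F_t.T

# Toy usage with random unitary bases (illustration only).
rng = np.random.default_rng(0)
N, T = 8, 6
Q, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
F_dft = np.fft.fft(np.eye(T), norm="ortho")     # unitary DFT matrix as temporal base
X = rng.standard_normal((N, T))
Y = dmpjfrft_forward(X, Q, F_dft, rng.uniform(0, 1, size=(T, N)), rng.uniform(0, 1, size=T))
```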
Complexity:
- Eigendecomposition of the graph and temporal shift operators: $\mathcal{O}(N^3 + T^3)$ (one-time cost).
- Each forward or inverse transform: for each of the $T$ columns, two matrix-vector products of size $N$ (with $V_G^{-1}$ and $V_G$) plus a diagonal scaling, and analogous products of size $T$ along the temporal dimension, yielding $\mathcal{O}(N^2 T + N T^2)$ per signal (Cui et al., 20 Nov 2025).
- Compared to a joint grid search over scalar fractional orders, whose cost grows with the number of candidate order pairs, DMPJFRFT-based gradient or neural learning requires only a fixed number of forward and inverse transforms per training epoch (Yan et al., 29 Jul 2025).
4. Intelligent Filtering and Learning Methods
DMPJFRFT enables two advanced intelligent filtering paradigms:
Gradient-Descent Filtering:
A parametric spectral filter $H$ is applied in the DMPJFRFT domain. All parameters are optimized jointly to minimize the loss
$$\mathcal{L} = \big\| \mathcal{F}_{\boldsymbol{\alpha},\boldsymbol{\beta}}^{-1}\!\big( H \odot \mathcal{F}_{\boldsymbol{\alpha},\boldsymbol{\beta}}(X_{\mathrm{noisy}}) \big) - X_{\mathrm{clean}} \big\|_F^2$$
for noisy input $X_{\mathrm{noisy}}$ and clean reference $X_{\mathrm{clean}}$, where $\mathcal{F}_{\boldsymbol{\alpha},\boldsymbol{\beta}}$ denotes the DMPJFRFT. Gradients with respect to $\{\boldsymbol{\alpha}_t\}$, $\boldsymbol{\beta}$, and $H$ are computed and updated via standard optimizers, exploiting the closed-form differentiability of the DMPJFRFT (Cui et al., 20 Nov 2025).
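Below is a minimal differentiable sketch of this scheme in PyTorch, assuming precomputed eigendecompositions of the graph and temporal base transforms and the blkdiag/Kronecker structure reconstructed in Section 1. The class name `DMPJFRFTFilter`, the variable names, and the toy unitary bases are illustrative assumptions, not the paper's implementation.

```python
import torch

class DMPJFRFTFilter(torch.nn.Module):
    """Learnable DMPJFRFT-domain filter (sketch): per-time graph orders alpha[t, n],
    temporal orders beta[k], and a pointwise spectral filter H."""

    def __init__(self, V_g, lam_g, V_t, lam_t):
        super().__init__()
        N, T = lam_g.numel(), lam_t.numel()
        self.V_g, self.V_g_inv = V_g, torch.linalg.inv(V_g)
        self.V_t, self.V_t_inv = V_t, torch.linalg.inv(V_t)
        self.lam_g, self.lam_t = lam_g, lam_t
        self.alpha = torch.nn.Parameter(torch.ones(T, N))  # graph orders, one vector per time step
        self.beta = torch.nn.Parameter(torch.ones(T))      # temporal orders
        self.H = torch.nn.Parameter(torch.ones(N, T))      # pointwise spectral filter

    def _graph_stage(self, X, sign):
        # Per-column multi-parameter graph FRFT; sign = -1 gives the inverse
        # via index-additivity (same eigenbasis, negated orders).
        cols = []
        for t in range(X.shape[1]):
            d = torch.exp(sign * self.alpha[t] * torch.log(self.lam_g))
            cols.append(self.V_g @ (d * (self.V_g_inv @ X[:, t])))
        return torch.stack(cols, dim=1)

    def _temporal_op(self, sign):
        d = torch.exp(sign * self.beta * torch.log(self.lam_t))
        return self.V_t @ torch.diag(d) @ self.V_t_inv

    def forward(self, X):                                   # X: (N, T) real signal
        X = X.to(torch.cfloat)
        Y = self._graph_stage(X, 1.0) @ self._temporal_op(1.0).T    # forward DMPJFRFT
        Y = self.H.to(torch.cfloat) * Y                              # filtering in the joint domain
        Y = self._graph_stage(Y @ self._temporal_op(-1.0).T, -1.0)  # inverse DMPJFRFT
        return Y.real

# Toy joint optimization of alpha, beta, H against a clean reference (illustration only).
N, T = 8, 6
Q, _ = torch.linalg.qr(torch.randn(N, N, dtype=torch.cfloat))       # stand-in graph base transform
lam_g, V_g = torch.linalg.eig(Q)
F_dft = torch.fft.fft(torch.eye(T, dtype=torch.cfloat), norm="ortho")
lam_t, V_t = torch.linalg.eig(F_dft)
X_clean = torch.randn(N, T)
X_noisy = X_clean + 0.1 * torch.randn(N, T)

model = DMPJFRFTFilter(V_g, lam_g, V_t, lam_t)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((model(X_noisy) - X_clean) ** 2)
    loss.backward()
    opt.step()
```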
Neural Network Integration (DMPJFRFTNet):
The DMPJFRFT is embedded as a trainable layer, with all spectral parameters $\{\boldsymbol{\alpha}_t\}$, $\boldsymbol{\beta}$, and the filter $H$ updated end-to-end with a task-specific loss (e.g., MSE or a PSNR/SSIM surrogate). The inference architecture is:
- Forward: $\widehat{X} = \mathcal{F}_{\boldsymbol{\alpha},\boldsymbol{\beta}}(X)$
- Filtering: $\widetilde{X} = H \odot \widehat{X}$
- Inverse: $\widehat{Y} = \mathcal{F}_{\boldsymbol{\alpha},\boldsymbol{\beta}}^{-1}(\widetilde{X})$
No clean reference is required for inference; all learning is performed during training (Cui et al., 20 Nov 2025).
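Reusing the hypothetical `DMPJFRFTFilter` sketch above as the trainable layer, inference after training is a single forward pass on the degraded input, with no clean reference involved:

```python
# After training (previous sketch), restoration needs only the degraded observation.
model.eval()
with torch.no_grad():
    restored = model(X_noisy)   # (N, T) restored signal; no clean reference used at inference
```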
5. Experimental Validation and Results
DMPJFRFT was evaluated on real-world dynamic graph signals (PEMSD7(M), PEMS08, SST temperature series) and video restoration tasks (REDS, GoPro) (Cui et al., 20 Nov 2025). Corruption included additive Gaussian noise at levels up to 120 and blur kernels on video.
Key results:
- Real Graph Denoising (SNR): DMPJFRFT-I-I achieved 17.5–20.6 dB SNR at noise levels up to 100, outperforming JFRFT (11.2–17.3 dB) and 2D GFRFT (5.6–16.1 dB).
- Video Denoising (REDS): DMPJFRFT-I-II yielded lower MSE and a PSNR of 100.93 dB, bettering the JFRFT and graph-only baselines.
- Video Deblurring: DMPJFRFT improvements are consistent across MSE, PSNR, and SSIM, with PSNR gains of 20–40 dB.
- Comparison with GNNs: DMPJFRFTNet exceeded all tested GNN variants in SNR and PSNR on graph and video data. For example, on GoPro (video) DMPJFRFT-I-INet achieved MSE 72.9, PSNR 29.64 dB, and SSIM 0.797, compared to JFRFTNet (MSE 97.4, PSNR 28.97, SSIM 0.752) and BernNet (MSE 148.5, PSNR 26.52, SSIM 0.745).
Qualitative insights: The per-time fractional orders and per-frequency orders adapt to local signal complexity, supporting sharper edge and texture recovery (Cui et al., 20 Nov 2025).
6. Theoretical Properties and Significance
DMPJFRFT generalizes the JFRT by permitting parameter adaptivity at fine spectral resolution. Main theoretical guarantees:
- Invertibility and Unitarity: DMPJFRFT maintains transform invertibility and energy preservation under normal (unitary diagonalizable) graph and temporal shifts.
- Index-Additivity: The transform is closed under addition of parameter sequences, extending the classical group property of the FRFT (see the module-level identity displayed after this list).
- Eigenstructure: The eigenbasis remains the Kronecker product of graph and temporal eigenvectors, with eigenvalues modulated by their adaptive exponents.
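At the level of a single module with a fixed eigenbasis, this additivity follows directly from the eigenvalue powers (a sketch in the notation of Section 1, up to branch-cut conventions; the joint statement is established in (Cui et al., 20 Nov 2025)):
$$F_G^{\boldsymbol{\alpha}^{(1)}}\, F_G^{\boldsymbol{\alpha}^{(2)}} = V_G\, \Lambda_G^{\boldsymbol{\alpha}^{(1)}}\, \Lambda_G^{\boldsymbol{\alpha}^{(2)}}\, V_G^{-1} = V_G\, \Lambda_G^{\boldsymbol{\alpha}^{(1)}+\boldsymbol{\alpha}^{(2)}}\, V_G^{-1} = F_G^{\boldsymbol{\alpha}^{(1)}+\boldsymbol{\alpha}^{(2)}}.$$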
This flexibility enables robust denoising, deblurring, and adaptive restoration for nonstationary and temporally inhomogeneous signals (Cui et al., 20 Nov 2025). A plausible implication is that DMPJFRFT provides a fundamentally improved foundation for dynamic graph signal processing over previously available joint transforms.
7. Context and Extensions
DMPJFRFT builds on the joint time-vertex transform lineage (Alikaşifoğlu et al., 2022), extending its capacity through dynamic, many-parameter exponents. Earlier trainable and differentiable forms, such as the hyper-differential JFRT (with scalar parameters learnable via gradient descent or neural networks), are strictly contained as special cases of DMPJFRFT (Yan et al., 29 Jul 2025). The approach is compatible with, and can subsume, blockwise dynamic graph models and patch-based video processing, and is well-suited for end-to-end learning in parameter-rich models for large-scale, high-dimensional, dynamic, and non-Euclidean data.
References:
- (Alikaşifoğlu et al., 2022) "Joint Time-Vertex Fractional Fourier Transform"
- (Yan et al., 29 Jul 2025) "Trainable Joint Time-Vertex Fractional Fourier Transform"
- (Cui et al., 20 Nov 2025) "Dynamic Multiple-Parameter Joint Time-Vertex Fractional Fourier Transform and its Intelligent Filtering Methods"