
Dynamic Multi-Parameter Joint Time-Vertex FRFT

Updated 27 November 2025
  • DMPJFRFT is a transform that assigns adaptive fractional orders per time and graph frequency to capture nonstationary dynamics in signals.
  • It maintains key properties such as unitarity, invertibility, and index-additivity, facilitating gradient-based learning and neural network integration.
  • Experimental validations show superior denoising and deblurring performance on dynamic graph signals and video data compared to classical transforms.

The Dynamic Multiple-Parameter Joint Time-Vertex Fractional Fourier Transform (DMPJFRFT) generalizes the spectral analysis of dynamic graph signals by introducing time- and frequency-adaptive fractional orders in both spatial (vertex) and temporal dimensions. Unlike classical joint time-vertex transforms that employ a single pair of fractional orders for an entire signal, the DMPJFRFT assigns distinct fractional parameters to each graph frequency at each time instant or temporal frequency. This yields significantly enhanced spectral modeling capability for data exhibiting temporally evolving topology or nonstationary dynamics, such as traffic sensor networks, biological networks, and natural video sequences. The transform achieves unitarity, invertibility, and index-additivity, and admits direct gradient-based learning and neural network integration for signal restoration and adaptive filtering (Cui et al., 20 Nov 2025).

1. Mathematical Formulation

Let $G = (N, E, A)$ denote a graph with $N$ nodes, and let $X \in \mathbb{C}^{N \times T}$ be a dynamic graph signal with $T$ time steps. The DMPJFRFT is defined via two multi-parameter modules:

  • Type-I Multiple-Parameter Graph FRFT: For each time $t$, a graph spectral fractional-order vector $a^{(t)} \in \mathbb{R}^N$ is assigned, producing a graph transform at $t$ of the form

$$F_{G}^{a^{(t)}} = V_G \,\operatorname{diag}\!\big(\lambda_0^{a_0^{(t)}}, \ldots, \lambda_{N-1}^{a_{N-1}^{(t)}}\big)\, V_G^{-1}$$

where $V_G$ diagonalizes the graph shift operator.

  • Type-I Multiple-Parameter Discrete FRFT: A vector $b \in \mathbb{R}^T$ of temporal fractional orders is defined, yielding

$$D^{b} = V_T \,\operatorname{diag}\!\big(\mu_0^{b_0}, \ldots, \mu_{T-1}^{b_{T-1}}\big)\, V_T^{-1}$$

where $V_T$ diagonalizes the temporal shift operator.

The DMPJFRFT operator acts column-wise as

$$Y = \left[ F_G^{a^{(1)}} x_1,\ \ldots,\ F_G^{a^{(T)}} x_T \right] (D^b)^T$$

with $x_t$ the $t$-th column of $X$. In vectorized form, this corresponds to

$$y = F_J(A, b)\, x$$

where $F_J(A, b) = (D^b \otimes I_N)\, \operatorname{blkdiag}\!\big(F_G^{a^{(1)}}, \ldots, F_G^{a^{(T)}}\big)$ and $A = [a^{(1)}, \ldots, a^{(T)}] \in \mathbb{R}^{N \times T}$.

The inverse transform is given by

$$X = F_J(-A, -b)\, y$$

assuming invertibility of each component.

Main algebraic properties established include unitarity (energy preservation), index-additivity, and an explicit eigenstructure: the transform diagonalizes on the Kronecker-product basis $v_t^T \otimes v_j^G$ with eigenvalues $\mu_t^{b_t} \lambda_j^{a_j^{(t)}}$ (Cui et al., 20 Nov 2025).
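The unitarity claim can be checked numerically. The sketch below is a toy construction, not the paper's setup: random orthonormal eigenvectors stand in for $V_G$ and $V_T$, and unit-modulus eigenvalues $\lambda_j = e^{i\theta_j}$, $\mu_t = e^{i\phi_t}$ keep every fractional power unimodular, so the assembled $F_J(A, b)$ should be unitary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 3

def orthonormal(n):
    # random orthonormal eigenvector basis via QR
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

VG, VT = orthonormal(N), orthonormal(T)
theta = rng.uniform(0, 2 * np.pi, N)   # graph eigenvalue angles: lambda_j = e^{i theta_j}
phi = rng.uniform(0, 2 * np.pi, T)     # temporal eigenvalue angles: mu_t = e^{i phi_t}

A = rng.uniform(0, 2, (N, T))          # per-time graph fractional orders a^{(t)}
b = rng.uniform(0, 2, T)               # temporal fractional orders

# blkdiag(F_G^{a^{(1)}}, ..., F_G^{a^{(T)}})
B = np.zeros((N * T, N * T), dtype=complex)
for t in range(T):
    B[t * N:(t + 1) * N, t * N:(t + 1) * N] = (
        VG @ np.diag(np.exp(1j * A[:, t] * theta)) @ VG.T
    )

Db = VT @ np.diag(np.exp(1j * b * phi)) @ VT.T
FJ = np.kron(Db, np.eye(N)) @ B        # F_J(A, b) = (D^b ⊗ I_N) · blkdiag(...)

# unitarity: F_J^H F_J = I_{NT}
print(np.allclose(FJ.conj().T @ FJ, np.eye(N * T)))  # True
```

The Kronecker factor $(D^b \otimes I_N)$ matches the column-stacking convention of $\operatorname{vec}(X)$, i.e. right-multiplication of $X$ by $(D^b)^T$.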

2. Comparison with Prior Joint Fractional Transforms

The original Joint Time-Vertex Fractional Fourier Transform (JFRT) assigns a pair of scalar fractional orders $(\alpha_t, \alpha_v)$, yielding the operator

$$\mathcal{F}_{\alpha_t, \alpha_v}\{X\} = F_G^{\alpha_v}\, X\, (F^{\alpha_t})^T$$

which does not permit per-frequency or per-time adaptation (Alikaşifoğlu et al., 2022). While this JFRT maintains unitarity and index-additivity and supports Tikhonov-regularized denoising, it is limited in spectral selectivity. The DMPJFRFT generalizes the JFRT by introducing the parameter matrix $A$ and vector $b$ as described above, enabling dynamic and fine-grained control of spectral characteristics, which is leveraged for advanced adaptive filtering (Cui et al., 20 Nov 2025).
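The containment can be illustrated directly: freezing the DMPJFRFT parameters to constants, $a^{(t)} = \alpha_v \mathbf{1}$ for every $t$ and $b = \alpha_t \mathbf{1}$, reproduces the two-sided JFRT form. Below is a minimal NumPy sketch with random stand-in eigendecompositions (positive eigenvalues are assumed so the fractional powers stay real); all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 3
VG, _ = np.linalg.qr(rng.standard_normal((N, N)))
VT, _ = np.linalg.qr(rng.standard_normal((T, T)))
lam = rng.uniform(0.5, 2.0, N)   # graph eigenvalues (positive: real fractional powers)
mu = rng.uniform(0.5, 2.0, T)    # temporal eigenvalues

alpha_v, alpha_t = 0.6, 1.3      # the single JFRT order pair
X = rng.standard_normal((N, T))

# V diag(ev^a) V^{-1} for an orthonormal eigenbasis V
frac = lambda V, ev, a: V @ np.diag(ev ** a) @ V.T

# DMPJFRFT with constant parameters A[:, t] = alpha_v, b = alpha_t
Y_dmp = np.column_stack(
    [frac(VG, lam, alpha_v) @ X[:, t] for t in range(T)]
) @ frac(VT, mu, alpha_t).T

# classical JFRT:  F_G^{alpha_v} X (F^{alpha_t})^T
Y_jfrt = frac(VG, lam, alpha_v) @ X @ frac(VT, mu, alpha_t).T

print(np.allclose(Y_dmp, Y_jfrt))  # True
```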

3. Discrete Algorithmic Implementation and Computational Complexity

Given eigendecompositions $Z = V_G \Lambda_G V_G^{-1}$ and $D_T = V_T \Lambda_T V_T^{-1}$, the DMPJFRFT is computed as follows (Cui et al., 20 Nov 2025):

Algorithm 1: Compute DMPJFRFT-I-I

  • For each time $t = 1, \ldots, T$:
    • $L_G^{(t)} \gets \operatorname{diag}\!\big(\lambda_0^{a_0^{(t)}}, \ldots, \lambda_{N-1}^{a_{N-1}^{(t)}}\big)$.
    • $F_G^{(t)} \gets V_G L_G^{(t)} V_G^{-1}$.
  • $F_{\text{blk}} \gets \operatorname{blkdiag}\!\big(F_G^{(1)}, \ldots, F_G^{(T)}\big)$.
  • $L_T \gets \operatorname{diag}\!\big(\mu_0^{b_0}, \ldots, \mu_{T-1}^{b_{T-1}}\big)$.
  • $D^b \gets V_T L_T V_T^{-1}$.
  • Apply $F_{\text{blk}}$ to $\operatorname{vec}(X)$, then $(D^b \otimes I_N)$ to the result.
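Algorithm 1 can be sketched in NumPy; the matrices below are random stand-ins for the actual eigendecompositions, and positive eigenvalues are assumed so fractional powers remain real. The inverse here undoes the temporal factor first and then the per-column graph factors, which is exactly invertible and coincides with applying $F_J(-A, -b)$ whenever the two factors commute (e.g., for constant $A$).

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 5, 4

# stand-ins for the eigendecompositions of the graph and temporal shift operators
VG, _ = np.linalg.qr(rng.standard_normal((N, N)))
VT, _ = np.linalg.qr(rng.standard_normal((T, T)))
lam = rng.uniform(0.5, 2.0, N)
mu = rng.uniform(0.5, 2.0, T)

def dmpjfrft(X, A, b):
    """Algorithm 1: per-time graph FRFTs, then the temporal multi-parameter FRFT."""
    # F_G^{a^{(t)}} x_t = V_G (lam^{a^{(t)}} ∘ V_G^{-1} x_t) for every time step t
    Y = np.column_stack([VG @ (lam ** A[:, t] * (VG.T @ X[:, t])) for t in range(T)])
    # right-multiplication by (D^b)^T is the temporal stage
    Db = VT @ np.diag(mu ** b) @ VT.T
    return Y @ Db.T

def inverse_dmpjfrft(Y, A, b):
    # undo the factors in reverse order: temporal first, then per-column graph
    Dbinv = VT @ np.diag(mu ** (-b)) @ VT.T
    Z = Y @ Dbinv.T
    return np.column_stack([VG @ (lam ** (-A[:, t]) * (VG.T @ Z[:, t])) for t in range(T)])

A = rng.uniform(0, 2, (N, T))
b = rng.uniform(0, 2, T)
X = rng.standard_normal((N, T))
X_rec = inverse_dmpjfrft(dmpjfrft(X, A, b), A, b)
print(np.allclose(X_rec, X))  # True
```

Working in the $N \times T$ matrix form avoids materializing the $NT \times NT$ operator, which is exactly the source of the $\mathcal{O}(N^2 T + N T^2)$ per-signal cost discussed below.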

Complexity:

  • Eigendecomposition: $\mathcal{O}(N^3 + T^3)$ (one-time cost).
  • Each forward or inverse transform applies the block-diagonal graph stage ($\mathcal{O}(N^2 T)$) and the temporal stage ($\mathcal{O}(N T^2)$), yielding $\mathcal{O}(N^2 T + N T^2)$ per signal; with $N \approx T$ this is $\mathcal{O}(N^3)$ (Cui et al., 20 Nov 2025).
  • Compared to joint grid search over scalar fractional orders (cost $\mathcal{O}(S_\phi S_\psi N^4 T^4)$), DMPJFRFT-based gradient or neural learning requires only $\mathcal{O}(E N^2 T^2)$ over $E$ training epochs (Yan et al., 29 Jul 2025).

4. Intelligent Filtering and Learning Methods

DMPJFRFT enables two advanced intelligent filtering paradigms:

Gradient-Descent Filtering:

A parametric spectral filter $H = \operatorname{diag}(h)$ is applied in the DMPJFRFT domain. All parameters $(A, b, h)$ are optimized jointly to minimize the loss

$$L(A, b, H) = \lVert \hat{X} - X \rVert_2^2, \qquad \hat{X} = F_J(-A, -b)\, H\, F_J(A, b)\, Y$$

for noisy input $Y$ and clean target $X$. Gradients with respect to $A$, $b$, and $H$ are computed and applied via standard optimizers, exploiting the closed-form differentiability of the DMPJFRFT (Cui et al., 20 Nov 2025).
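The optimization loop can be sketched as follows. This toy version replaces the paper's closed-form gradients with central finite differences so the example stays self-contained, and uses random stand-in eigendecompositions with positive eigenvalues; it only demonstrates that the loss $L(A, b, H)$ is jointly descendable in all three parameter groups.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 3, 3
VG, _ = np.linalg.qr(rng.standard_normal((N, N)))
VT, _ = np.linalg.qr(rng.standard_normal((T, T)))
lam = rng.uniform(0.5, 2.0, N)   # positive eigenvalues keep fractional powers real
mu = rng.uniform(0.5, 2.0, T)

def FJ_apply(X, A, b):
    # F_J(A, b) in matrix form: per-column graph FRFTs, then right-multiply by (D^b)^T
    Y = np.column_stack([VG @ (lam ** A[:, t] * (VG.T @ X[:, t])) for t in range(T)])
    return Y @ (VT @ np.diag(mu ** b) @ VT.T).T

def loss(p, Ynoisy, Xclean):
    A, b, h = p[:N * T].reshape(N, T), p[N * T:N * T + T], p[N * T + T:].reshape(N, T)
    Xhat = FJ_apply(h * FJ_apply(Ynoisy, A, b), -A, -b)  # F_J(-A,-b) H F_J(A,b) Y
    return np.sum((Xhat - Xclean) ** 2)

Xclean = rng.standard_normal((N, T))
Ynoisy = Xclean + 0.3 * rng.standard_normal((N, T))
p = np.ones(2 * N * T + T)   # start at A = 1, b = 1, H = identity filter

lr, eps = 1e-3, 1e-6
l0 = loss(p, Ynoisy, Xclean)
for _ in range(100):
    g = np.zeros_like(p)     # central-difference stand-in for the closed-form gradients
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (loss(p + e, Ynoisy, Xclean) - loss(p - e, Ynoisy, Xclean)) / (2 * eps)
    p -= lr * g
print(loss(p, Ynoisy, Xclean) < l0)  # True: descent reduces the loss
```

At the chosen initialization the restoration map is the identity, so the starting loss is exactly the noise energy and every subsequent step descends from it.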

Neural Network Integration (DMPJFRFTNet):

The DMPJFRFT is embedded as a trainable layer, with all spectral parameters $A$, $b$, and $H$ updated end-to-end with a task-specific loss (e.g., MSE or a PSNR/SSIM surrogate). The inference architecture is:

  • Forward: $Y_{\text{spectral}} = F_J(A, b)\, Y_{\text{noisy}}$
  • Filtering: $Z = H \circ Y_{\text{spectral}}$
  • Inverse: $\hat{X} = F_J(-A, -b)\, Z$

No clean $X$ is required at inference; all learning is performed during training (Cui et al., 20 Nov 2025).
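A hypothetical PyTorch wrapper illustrates this end-to-end setup. The class name, constructor signature, and initialization below are assumptions made for the sketch, not the paper's architecture; the inverse undoes the two factors in reverse order so the layer starts as an exact identity.

```python
import torch

class DMPJFRFTNet(torch.nn.Module):
    """Sketch of a DMPJFRFT layer with trainable (A, b, H); names are illustrative."""

    def __init__(self, VG, lam, VT, mu):
        super().__init__()
        self.VG, self.lam, self.VT, self.mu = VG, lam, VT, mu
        N, T = VG.shape[0], VT.shape[0]
        self.A = torch.nn.Parameter(torch.ones(N, T, dtype=VG.dtype))  # per-time graph orders
        self.b = torch.nn.Parameter(torch.ones(T, dtype=VG.dtype))     # temporal orders
        self.H = torch.nn.Parameter(torch.ones(N, T, dtype=VG.dtype))  # pointwise spectral filter

    def frft(self, X, A, b):
        # per-column graph FRFTs, then right-multiplication by (D^b)^T
        cols = [self.VG @ (self.lam ** A[:, t] * (self.VG.T @ X[:, t]))
                for t in range(X.shape[1])]
        Db = self.VT @ torch.diag(self.mu ** b) @ self.VT.T
        return torch.stack(cols, dim=1) @ Db.T

    def ifrft(self, Y, A, b):
        # exact inverse: undo the temporal factor, then the per-column graph factors
        Dbinv = self.VT @ torch.diag(self.mu ** (-b)) @ self.VT.T
        Z = Y @ Dbinv.T
        cols = [self.VG @ (self.lam ** (-A[:, t]) * (self.VG.T @ Z[:, t]))
                for t in range(Z.shape[1])]
        return torch.stack(cols, dim=1)

    def forward(self, Ynoisy):
        Z = self.H * self.frft(Ynoisy, self.A, self.b)   # filter in the DMPJFRFT domain
        return self.ifrft(Z, self.A, self.b)
```

At initialization ($A$, $b$, and $H$ all ones) the layer acts as the identity, and any reconstruction loss backpropagates into all three parameter groups.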

5. Experimental Validation and Results

DMPJFRFT was evaluated on real-world dynamic graph signals (PEMSD7(M), PEMS08, SST temperature series) and video restoration tasks (REDS, GoPro) (Cui et al., 20 Nov 2025). Corruption included additive Gaussian noise with $\sigma = 15$–$120$ and blur kernels on video.

Key results:

  • Real Graph Denoising (SNR): DMPJFRFT-I-I achieved 17.5–20.6 dB SNR at $\sigma = 40$–$100$, outperforming JFRFT (11.2–17.3 dB) and 2D GFRFT (5.6–16.1 dB).
  • Video Denoising (REDS, $\sigma = 45$): DMPJFRFT-I-II yielded an MSE of $5.25 \times 10^{-6}$ and PSNR of 100.93 dB, bettering the JFRFT and graph-only baselines.
  • Video Deblurring: DMPJFRFT improvements are consistent across MSE, PSNR, and SSIM, with PSNR gains of up to 20–40 dB.
  • Comparison with GNNs: DMPJFRFTNet exceeded all tested GNN variants in SNR and PSNR on graph and video data. For example, on GoPro (video, $\sigma = 45$) DMPJFRFT-I-INet achieved MSE 72.9, PSNR 29.64 dB, and SSIM 0.797, compared to JFRFTNet (MSE 97.4, PSNR 28.97 dB, SSIM 0.752) and BernNet (MSE 148.5, PSNR 26.52 dB, SSIM 0.745).

Qualitative insights: The per-time fractional orders $a^{(t)}$ and per-frequency temporal orders $b$ adapt to local signal complexity, supporting sharper edge and texture recovery (Cui et al., 20 Nov 2025).

6. Theoretical Properties and Significance

DMPJFRFT generalizes the JFRT by permitting parameter adaptivity at fine spectral resolution. Main theoretical guarantees:

  • Invertibility and Unitarity: DMPJFRFT maintains transform invertibility and energy preservation under normal (unitarily diagonalizable) graph and temporal shift operators.
  • Index-Additivity: The transform is closed under addition of parameter sequences, extending the classical group property of the FRFT.
  • Eigenstructure: The eigenbasis remains the Kronecker product of graph and temporal eigenvectors, with eigenvalues modulated by their adaptive exponents.

This flexibility enables robust denoising, deblurring, and adaptive restoration for nonstationary and temporally inhomogeneous signals (Cui et al., 20 Nov 2025). A plausible implication is that DMPJFRFT provides a fundamentally improved foundation for dynamic graph signal processing over previously available joint transforms.

7. Context and Extensions

DMPJFRFT builds on the joint time-vertex transform lineage (Alikaşifoğlu et al., 2022), extending its capacity through dynamic, many-parameter exponents. Earlier trainable and differentiable forms, such as the hyper-differential JFRT (with scalar parameters learnable via gradient descent or neural networks), are strictly contained as special cases of DMPJFRFT (Yan et al., 29 Jul 2025). The approach is compatible with, and can subsume, blockwise dynamic graph models and patch-based video processing, and is well-suited for end-to-end learning in parameter-rich models for large-scale, high-dimensional, dynamic, and non-Euclidean data.
