
Graph Fractional Fourier Transform (GFRFT)

Updated 8 December 2025
  • Graph Fractional Fourier Transform (GFRFT) is a generalization of the graph Fourier transform that incorporates a continuous fractional parameter for flexible spectral analysis.
  • It interpolates between vertex and spectral domains, offering tunable trade-offs that enhance denoising, adaptive filtering, and multi-dimensional signal processing.
  • The framework supports adaptive order learning and scalable algorithms, improving sparse representation and overall performance in graph signal processing.

The Graph Fractional Fourier Transform (GFRFT) is a generalization of the classical graph Fourier transform, introducing a continuous fractional-order parameter that enables flexible spectral analysis of signals defined on graphs. By interpolating between the graph vertex and spectral domains, the GFRFT offers tunable trade-offs between locality and frequency resolution, and serves as a foundation for a spectrum of advanced transforms, including multi-dimensional, multi-parameter, time-vertex, and angular extensions. Its utility spans sparse representation, adaptive filtering, joint time-vertex analysis, denoising of dynamic graph signals, and parameter learning frameworks, thus expanding the capabilities of graph signal processing (GSP) in both theoretical rigor and practical expressiveness.

1. Mathematical Definition and Operator Forms

The GFRFT is defined for a graph $G$ with shift operator $Z$ (e.g., the adjacency matrix $A$ or Laplacian $L$) via its spectral decomposition $Z = V J_Z V^{-1}$. Two principal forms are employed:

a) Spectral (Fractional Power) Form:

Let $F_G = V^{-1}$ denote the GFT matrix, with eigendecomposition $F_G = P J_F P^{-1}$, where $J_F$ is a diagonal (or Jordan block) matrix. The fractional-order GFRFT is $F_G^{\alpha} = P J_F^{\alpha} P^{-1}$, where $J_F^{\alpha}$ is obtained by raising each Jordan block or eigenvalue to the power $\alpha$.
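
As a concrete illustration, the spectral form can be sketched in a few lines of NumPy. The path graph, the Laplacian shift, and the helper name `gfrft_matrix` below are illustrative choices (not from the cited papers), and the sketch assumes a diagonalizable $F_G$:

```python
import numpy as np

def gfrft_matrix(F, alpha):
    """Fractional power F^alpha via the eigendecomposition F = P J P^{-1},
    raising each eigenvalue to the power alpha (principal branch)."""
    w, P = np.linalg.eig(F)
    J_alpha = np.diag(np.exp(alpha * np.log(w.astype(complex))))
    return P @ J_alpha @ np.linalg.inv(P)

# Toy example: path graph on 5 vertices, Laplacian shift operator
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Lap = np.diag(A.sum(axis=1)) - A
_, V = np.linalg.eigh(Lap)   # Lap = V diag(lam) V^T, V orthonormal
F = V.T                      # GFT matrix F_G = V^{-1} = V^T

F_half = gfrft_matrix(F, 0.5)   # transform "halfway" to the spectral domain
```

Because the eigenvalues of a unitary $F_G$ lie on the unit circle, principal-branch powers keep $F_G^{\alpha}$ unitary, and index additivity holds exactly in this construction.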

b) Hyper-differential (Operator Exponential) Form:

The graph "coordinate" operator is defined as $D_G^2 = \frac{1}{2\pi}\left(\frac{j2}{\pi}\log(F_G) + \frac{1}{2}I\right)$, and the hyper-differential generator as $\widetilde{T}_G = -j\frac{\pi}{2}\left[\pi\left(D_G^2 + F_G D_G^2 F_G^{-1}\right) - \frac{1}{2}I\right]$. The GFRFT is then expressed as $F_G^{\alpha} = \exp(\alpha \widetilde{T}_G)$. Both forms yield a family of linear, additive, invertible, and (if $F_G$ is unitary) orthonormal transforms parameterized by $\alpha \in \mathbb{R}$ (Yan et al., 29 Jul 2025).
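
The operator-exponential viewpoint can be mimicked numerically with the matrix logarithm: taking $\log F_G$ as the generator, $\exp(\alpha \log F_G)$ reproduces the fractional power. This is a simplified sketch (the actual $\widetilde{T}_G$ above involves additional fixed rescalings), and the path-graph setup is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import expm, logm

def gfrft_exp(F, alpha):
    """Operator-exponential form: exp(alpha * log F), with the principal
    matrix logarithm standing in for the hyper-differential generator."""
    return expm(alpha * logm(F))

# path-graph GFT matrix, as a small test case
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Lap = np.diag(A.sum(axis=1)) - A
F = np.linalg.eigh(Lap)[1].T

F_a = gfrft_exp(F, 0.5)
```

Since the generator commutes with itself, the semigroup property $\exp(\tfrac{\alpha}{2}\log F)\exp(\tfrac{\alpha}{2}\log F) = \exp(\alpha\log F)$ holds by construction, mirroring index additivity.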

2. Fundamental Properties and Algebraic Structure

The GFRFT inherits key algebraic properties from the continuous fractional Fourier transform:

  • Linearity: $F_G^{\alpha}(a x + b y) = a F_G^{\alpha} x + b F_G^{\alpha} y$
  • Additivity (index additivity): $F_G^{\alpha_1} F_G^{\alpha_2} = F_G^{\alpha_1 + \alpha_2}$
  • Invertibility: $(F_G^{\alpha})^{-1} = F_G^{-\alpha}$
  • Orthonormality/unitarity (if $F_G$ is unitary): $(F_G^{\alpha})^H F_G^{\alpha} = I$
  • Differentiability: the derivative with respect to $\alpha$ exists in closed form, $\partial F_G^{\alpha}/\partial\alpha = \widetilde{T}_G F_G^{\alpha}$, supporting gradient-based optimization in neural network layers (Yan et al., 29 Jul 2025).
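
These properties are straightforward to check numerically. The snippet below is an illustrative sketch using a small path graph and the same eigendecomposition-based construction; it verifies unitarity, invertibility, and index additivity:

```python
import numpy as np

def frac_power(F, alpha):
    """Principal-branch fractional power of a diagonalizable matrix."""
    w, P = np.linalg.eig(F)
    return P @ np.diag(np.exp(alpha * np.log(w.astype(complex)))) @ np.linalg.inv(P)

# unitary GFT of a 5-vertex path graph
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Lap = np.diag(A.sum(axis=1)) - A
F = np.linalg.eigh(Lap)[1].T

Fa = frac_power(F, 0.4)
unitary    = np.allclose(Fa.conj().T @ Fa, np.eye(5))          # (F^a)^H F^a = I
invertible = np.allclose(frac_power(F, -0.4) @ Fa, np.eye(5))  # F^{-a} F^a = I
additive   = np.allclose(frac_power(F, 0.4) @ frac_power(F, 0.6), F)
```

Linearity holds trivially because $F_G^{\alpha}$ is a fixed matrix acting on signals.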

3. Multi-dimensional and Product-graph Extensions

For multi-dimensional signals on Cartesian product graphs, the GFRFT generalizes as follows:

  • Multi-dimensional GFRFT (MGFRFT): On $m$ factor graphs $\mathcal{G}_i$, Laplacian-based fractional eigenbases are constructed for each factor, and the transform on the product graph exploits tensor products and Kronecker operations:

$$\widehat{f}_\alpha(\ell_1, \dots, \ell_m) = \sum_{n_1,\dots,n_m} f(n_1,\dots,n_m) \prod_{i=1}^m \left(\kappa^{(i)}_{\ell_i}(n_i)\right)^*$$

with invertibility and energy preservation (Yan et al., 2021).

  • Bi-fractional and Kronecker extensions: Two-dimensional bi-fractional GFRFTs (2D-GBFRFT) assign independent fractional orders $(\alpha_1, \alpha_2)$ to the two dimensions, preserving separability and enabling definition by Kronecker products:

$$F_{2D}^{(\alpha_1, \alpha_2)} = F_{G_2}^{\alpha_2} \otimes F_{G_1}^{\alpha_1}$$

Both grid search and differentiable learning can jointly optimize fractional orders and spectral filters for Wiener-style denoising, outperforming single-order approaches on heterogeneous datasets (Wang et al., 13 Oct 2025).

  • Directed Graphs and SVD-based GFRFT: For directed graphs, SVD-based constructs allow fractional Laplacians via singular value decomposition, which maintain spectral concentration properties and fast Kronecker-product computations for multi-graph signals (Li et al., 4 Jun 2025, Yan et al., 2022).
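
The separable Kronecker structure can be sketched as follows: applying per-factor fractional transforms to the rows and columns of a 2-D graph signal coincides with the Kronecker-product operator acting on the column-stacked signal. The path-graph factors and helper names are illustrative assumptions:

```python
import numpy as np

def frac_power(F, alpha):
    """Principal-branch fractional power via eigendecomposition."""
    w, P = np.linalg.eig(F)
    return P @ np.diag(np.exp(alpha * np.log(w.astype(complex)))) @ np.linalg.inv(P)

def path_gft(n):
    """GFT matrix (Laplacian eigenbasis transpose) of an n-vertex path graph."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    Lap = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigh(Lap)[1].T

F1a = frac_power(path_gft(4), 0.3)   # order alpha_1 on factor graph G_1
F2a = frac_power(path_gft(3), 0.8)   # order alpha_2 on factor graph G_2

X = np.arange(12.0).reshape(4, 3)    # signal on the 4 x 3 product graph
Y_separable = F1a @ X @ F2a.T        # transform rows, then columns
F_2d = np.kron(F2a, F1a)             # F_2D = F2^{a2} (x) F1^{a1}
Y_kron = (F_2d @ X.flatten(order="F")).reshape(4, 3, order="F")
```

The agreement of the two routes is the identity $\mathrm{vec}(A X B^\top) = (B \otimes A)\,\mathrm{vec}(X)$ with column-major vectorization; the separable route avoids ever forming the large Kronecker matrix.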

4. Adaptive Fractional-order Selection and Learning Frameworks

A critical advancement is the embedding of the GFRFT order parameter as a trainable or optimizable variable inside end-to-end learning architectures. The gradient of $F_G^{\alpha}$ with respect to $\alpha$ allows backpropagation-based adaptation of both transform orders and filter coefficients, enabling:

  • Order learning in neural layers: $\partial \mathcal{L}/\partial\alpha = \langle \partial \mathcal{L}/\partial F_G^{\alpha},\, \widetilde{T}_G F_G^{\alpha} \rangle$
  • Complexity reduction: the trainable approach reduces the computational burden from $O(N^4 T^4)$ per grid point to $O(N^2 T^2)$ per iteration (plus a one-time $O(N^3)$ initialization), facilitating application to large graph domains (Yan et al., 29 Jul 2025).
  • Spectral filtering and classification: Gradient descent schemes adapt fractional orders and filter shapes in embedding and denoising pipelines, as demonstrated in spectral graph embedding and time-vertex Wiener filtering (Sheng et al., 4 Aug 2025, Wang et al., 13 Oct 2025).
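
The closed-form derivative that makes order learning possible can be checked against finite differences. In this sketch the generator is taken as $\log F_G$ in the eigenbasis, a simplified stand-in for $\widetilde{T}_G$; the graph and function names are illustrative:

```python
import numpy as np

def frft_and_grad(F, alpha):
    """Return F^alpha and its derivative d(F^alpha)/d alpha = T F^alpha,
    with generator T = P diag(log w) P^{-1} (principal branch)."""
    w, P = np.linalg.eig(F)
    Pinv = np.linalg.inv(P)
    logw = np.log(w.astype(complex))
    Fa = P @ np.diag(np.exp(alpha * logw)) @ Pinv
    T = P @ np.diag(logw) @ Pinv
    return Fa, T @ Fa

# path-graph GFT matrix
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Lap = np.diag(A.sum(axis=1)) - A
F = np.linalg.eigh(Lap)[1].T

Fa, dFa = frft_and_grad(F, 0.6)
eps = 1e-6   # central finite difference as a sanity check
fd = (frft_and_grad(F, 0.6 + eps)[0] - frft_and_grad(F, 0.6 - eps)[0]) / (2 * eps)
```

In a learning pipeline, this analytic derivative is what the chain-rule contraction $\langle \partial\mathcal{L}/\partial F^{\alpha}, T F^{\alpha}\rangle$ consumes each iteration, replacing a grid search over $\alpha$.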

5. Multiple-parameter and Angular Generalizations

The GFRFT admits further extension to multiple-parameter settings and angular-spectral control:

  • Multiple-parameter GFRFT (MPGFRFT): Distinct fractional orders per frequency (order vector $\mathbf{a}$) enable fine-grained adaptation to non-stationary or structurally diverse signals. MPGFRFT-I maintains unitarity and invertibility for a diagonalizable $F$:

$$F_I^{\mathbf{a}} = V \,\mathrm{diag}\!\left(\mu_0^{a_0}, \dots, \mu_{N-1}^{a_{N-1}}\right) V^{-1}$$

Joint parameter learning supports adaptive spectral compression, encryption, denoising, and nonlinear spectral signatures (Cui et al., 31 Jul 2025).

  • Angular Graph Fractional Fourier Transform (AGFRFT): Unifies fractional-order and basis-rotation controls via a differentiable rotation matrix $R(\theta)$:
    • Type I: $F_\theta^{\alpha,\mathrm{I}} = U R(\theta) \Lambda^\alpha R(-\theta) U^\top$
    • Type II: $F_\theta^{\alpha,\mathrm{II}} = (R(\theta) F^{-\alpha})^\top$
    • Both variants are unitary, invertible, and smoothly tunable, supporting learnable joint parameterization. AGFRFT surpasses GFRFT and AGFT in spectral concentration and denoising performance on real-world graph data (Zhao et al., 20 Nov 2025).
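
The MPGFRFT-I construction can be sketched directly: each eigenvalue $\mu_k$ of $F$ is raised to its own order $a_k$. The path graph and the order vector below are illustrative assumptions, and the frequency ordering follows whatever ordering the eigensolver returns:

```python
import numpy as np

def mpgfrft_i(F, orders):
    """MPGFRFT-I: F_I^a = V diag(mu_k^{a_k}) V^{-1} for a diagonalizable F,
    with one fractional order per graph frequency (principal branch)."""
    mu, V = np.linalg.eig(F)
    D = np.diag(np.exp(np.asarray(orders) * np.log(mu.astype(complex))))
    return V @ D @ np.linalg.inv(V)

# path-graph GFT matrix
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
Lap = np.diag(A.sum(axis=1)) - A
F = np.linalg.eigh(Lap)[1].T

a = np.array([0.2, 0.5, 0.9, 1.0, 0.7])   # per-frequency order vector
Fa = mpgfrft_i(F, a)
```

Setting all orders equal recovers the single-parameter GFRFT, and negating the order vector inverts the transform, matching the unitarity and invertibility claims above.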

6. Computational Algorithms and Application Domains

Efficient algorithms for the GFRFT are grounded in matrix functional analysis:

  • Eigendecomposition-based implementation: $O(N^3)$ for the spectral decomposition, $O(N^2)$ per matrix-vector multiplication.
  • Kronecker, tensor, and SVD-based fast methods: Exploit separability and product-graph structure to reduce complexity to $O(\sum_{i=1}^m N_i^3)$ for $m$-factor graphs, with further reduction via Kronecker products in multi-dimensional contexts (Yan et al., 2021, Li et al., 4 Jun 2025).
  • Polynomial approximation and Krylov subspace methods: Chebyshev or Lanczos polynomial approximations scale linearly with the number of graph edges, enabling GFRFT application to large sparse graphs (Ge et al., 2022, Zhao et al., 20 Nov 2025).
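
As a sketch of the polynomial route, the snippet below applies a degree-$K$ Chebyshev approximation of a spectral filter $h(L)$ using only matrix-vector products, the mechanism that lets such methods scale linearly in the number of edges. The heat-kernel filter and path graph are illustrative stand-ins, not the specific GFRFT kernel of the cited papers:

```python
import numpy as np

def chebyshev_filter(Lap, x, h, lam_max, K=40):
    """Approximate h(Lap) @ x with a degree-K Chebyshev expansion,
    using only matrix-vector products (no eigendecomposition)."""
    n = len(x)
    Lt = (2.0 / lam_max) * Lap - np.eye(n)        # map spectrum to [-1, 1]
    # Chebyshev coefficients of h on [0, lam_max], sampled at Chebyshev nodes
    theta = np.pi * (np.arange(K + 1) + 0.5) / (K + 1)
    vals = h(lam_max * (np.cos(theta) + 1.0) / 2.0)
    c = [2.0 / (K + 1) * np.sum(vals * np.cos(k * theta)) for k in range(K + 1)]
    T_prev, T_curr = x, Lt @ x                    # three-term recurrence
    y = 0.5 * c[0] * T_prev + c[1] * T_curr
    for k in range(2, K + 1):
        T_prev, T_curr = T_curr, 2.0 * (Lt @ T_curr) - T_prev
        y = y + c[k] * T_curr
    return y

# path graph on 8 vertices, heat-kernel filter h(lam) = exp(-lam)
A = np.diag(np.ones(7), 1) + np.diag(np.ones(7), -1)
Lap = np.diag(A.sum(axis=1)) - A
lam, V = np.linalg.eigh(Lap)
x = np.sin(np.arange(8.0))
y_cheb = chebyshev_filter(Lap, x, lambda s: np.exp(-s), lam_max=lam[-1])
y_exact = V @ (np.exp(-lam) * (V.T @ x))          # dense reference answer
```

On a sparse Laplacian, each `Lt @ T_curr` costs one pass over the edges, so the whole filter is $O(K|E|)$; in practice $\lambda_{\max}$ is bounded by a cheap estimate rather than a full eigensolve.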

Application domains span:

  • Graph signal denoising and Wiener filtering: Adaptive fractional-order transforms separate signal and noise in time-vertex and multi-dimensional graphs, increasing PSNR and reducing MSE relative to classical methods (Yan et al., 29 Jul 2025, Wang et al., 13 Oct 2025).
  • Sparse spectral representation and compression: Improved data compression at ultralow ratios, with sparser, more energy-compact spectral representations (Yan et al., 2021).
  • Spectral embedding and feature extraction: Enhanced expressiveness and classification accuracy in graph learning tasks via fractional-domain embeddings (Sheng et al., 4 Aug 2025).
  • Image encryption and anomaly detection: MPGFRFT-based encryption schemes exhibit high key-sensitivity and resistance to brute-force attack on image graphs (Cui et al., 31 Jul 2025). Vertex-frequency analysis via multi-windowed GFRFT frames reveals fine structural features and optimizes anomaly detection (Shang et al., 28 Dec 2024).

7. Connections, Extensions, and Research Directions

The GFRFT formalism unifies spectral analysis on graphs, time-vertex domains, and tensor products of Hilbert spaces, with seamless reduction to the classical GFT and FRFT. Recent research establishes its role as the foundational block for multi-dimensional, multi-parameter, time-vertex, and angular generalizations.

The literature highlights algorithmic and mathematical challenges in fractional operator approximation, parameter learning, and computational scalability; ongoing work is pushing toward differentiable, adaptive, and task-driven GFRFT frameworks capable of robust spectral analysis across diverse graph-based datasets.
