Graph Laplacian Filter
- A graph Laplacian filter is a spectral operator defined by applying a function to the Laplacian matrix, enabling low-pass, band-pass, or general filtering of graph signals.
- It leverages efficient polynomial and Krylov subspace approximations, such as Chebyshev methods, to scale filtering operations on large and sparse graphs.
- Recent extensions—including negative weights, multiscale filter banks, and nonlinear designs—enhance edge preservation, robustness, and adaptability in various applications.
A graph Laplacian filter is a graph-based signal processing operator defined by a function of the graph Laplacian, which acts as a low-pass, band-pass, or general spectral filter on signals supported on the nodes of a graph. The graph Laplacian—either in its combinatorial or normalized form—encodes the structural topology of the graph and provides the underlying Fourier-like basis for spectral analysis. Laplacian filters are critical for denoising, feature extraction, compression, and as convolutional layers in graph neural networks, with efficient scalable implementations using polynomial and Krylov subspace methods. Recent developments extend the framework to edge-enhancing filters using negative weights, robust designs under topology perturbations, and multichannel/multiscale filter banks for complex data modalities.
1. Mathematical Foundations of Graph Laplacian Filters
Let $G = (V, E)$ be a weighted, undirected graph with $n$ vertices, weight matrix $W$, and degree matrix $D$, $D_{ii} = \sum_j W_{ij}$. Two standard Laplacian matrices are used:
- Combinatorial Laplacian: $L = D - W$
- Symmetric normalized Laplacian: $\mathcal{L} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} W D^{-1/2}$
Both are real, symmetric, and positive semi-definite under $W_{ij} \ge 0$, but constructions with negative weights are possible as long as the matrix remains (strictly) diagonally dominant to preserve $L \succeq 0$ (Knyazev, 2015).
The eigendecomposition $L = U \Lambda U^\top$ provides the graph Fourier basis $U = [u_1, \dots, u_n]$ and the Laplacian spectrum $\{\lambda_k\}_{k=1}^n$, $0 = \lambda_1 \le \dots \le \lambda_n$. Any graph signal $x \in \mathbb{R}^n$ can be expanded as $x = \sum_k \hat{x}_k u_k$ with $\hat{x}_k = u_k^\top x$.
A graph Laplacian filter is defined as $H = h(L) = U\, h(\Lambda)\, U^\top$, where $h(\cdot)$ is a function applied to the Laplacian eigenvalues. Applying $H$ to $x$ multiplies each spectral component $\hat{x}_k$ by $h(\lambda_k)$.
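As a concrete illustration, the following NumPy sketch builds the graph Fourier basis of a small path graph and applies a spectral filter; the graph and the heat-kernel response $h(\lambda) = e^{-2\lambda}$ are illustrative assumptions, not choices from the cited works.

```python
import numpy as np

# Path graph on 5 nodes: weight matrix W, Laplacian L = D - W.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Graph Fourier basis from the eigendecomposition L = U diag(lam) U^T.
lam, U = np.linalg.eigh(L)

def spectral_filter(x, h):
    """Apply H = U h(Lambda) U^T to a graph signal x."""
    x_hat = U.T @ x               # forward graph Fourier transform
    return U @ (h(lam) * x_hat)   # scale each component by h(lambda_k), invert

x = np.random.default_rng(0).standard_normal(n)
y = spectral_filter(x, lambda lm: np.exp(-2.0 * lm))  # heat-kernel low-pass
```

Since $h(\lambda) \le 1$ here, the filtered signal's Dirichlet energy $y^\top L y$ can only decrease, which is the defining behavior of a low-pass graph filter.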
2. Spectral Design and Filter Classes
2.1 Classical and Low-Pass Designs
The bilateral filter is a one-hop spectral filter $h(\lambda) = 1 - \lambda$ on the normalized Laplacian, acting as a low-pass filter (Gadde et al., 2013). More generally, graph Laplacian regularization leads to smoothness functionals $x^\top L x$, and regularized denoising solutions of the form
$$\hat{x} = \left(I + \rho(L)\right)^{-1} y,$$
where $\rho(L)$ is a chosen regularization matrix function (often a positive, monotonically increasing function of $\lambda$), e.g., $\rho(\lambda) = \lambda$, $\rho(\lambda) = \alpha\lambda$, or $\rho(\lambda) = e^{\tau\lambda} - 1$, producing Laplacian, Tikhonov, and diffusion (heat) kernel filters, respectively (Salim et al., 2020, Egilmez et al., 2018).
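A minimal sketch of the Tikhonov case, i.e. solving $(I + \alpha L)\hat{x} = y$ with a sparse direct solve; the path graph, sine test signal, noise level, and $\alpha$ are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags, eye
from scipy.sparse.linalg import spsolve

# Sparse path-graph Laplacian for a 1-D signal of length n.
n = 200
main = np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]
L = diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, n))
noisy = clean + 0.3 * rng.standard_normal(n)

# Tikhonov denoising: argmin ||x - y||^2 + alpha x^T L x = (I + alpha L)^{-1} y
alpha = 5.0
denoised = spsolve(eye(n, format="csc") + alpha * L, noisy)
```

The solve touches only the sparse matrix, so the same code scales to any graph whose Laplacian fits in sparse storage.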
2.2 Polynomial and FIR Graph Filters
To enable scalable computation, filters are approximated by polynomials: $h(L) \approx \sum_{k=0}^{K} c_k L^k$. This provides a finite impulse response (FIR) graph filter with spectral response $h(\lambda) = \sum_{k=0}^{K} c_k \lambda^k$, where the coefficients $c_k$ determine the passband/stopband characteristics (Kruzick et al., 2018, Knyazev et al., 2015). Chebyshev polynomial approximation provides numerically stable and efficient polynomial expansions, with recurrences that avoid explicit eigendecomposition (Gadde et al., 2013, Knyazev et al., 2015).
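The Chebyshev approach can be sketched as follows: coefficients come from the standard cosine-grid formula, and the filter is applied through the three-term recurrence using only sparse matrix-vector products. The path graph, target response $e^{-\lambda}$, degree, and spectral bound are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags

# Sparse path-graph Laplacian on n nodes.
n = 400
L = diags([np.r_[1.0, 2.0 * np.ones(n - 2), 1.0],
           -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

def cheby_filter(L, x, h, K, lam_max):
    """Approximate h(L) x with a degree-K Chebyshev expansion on [0, lam_max]."""
    # Chebyshev coefficients of h on [0, lam_max] via the shifted cosine grid.
    j = np.arange(K + 1)
    pts = np.cos(np.pi * (j + 0.5) / (K + 1))
    vals = h(lam_max / 2 * (pts + 1))
    c = np.array([2.0 / (K + 1)
                  * np.sum(vals * np.cos(k * np.pi * (j + 0.5) / (K + 1)))
                  for k in range(K + 1)])
    # Three-term recurrence on the shifted operator Lt = 2 L / lam_max - I.
    def Lt(v):
        return (2.0 / lam_max) * (L @ v) - v
    t_prev, t_cur = x, Lt(x)                 # T_0 x and T_1 x
    y = 0.5 * c[0] * t_prev + c[1] * t_cur
    for k in range(2, K + 1):
        t_prev, t_cur = t_cur, 2.0 * Lt(t_cur) - t_prev
        y = y + c[k] * t_cur
    return y

x = np.random.default_rng(2).standard_normal(n)
y = cheby_filter(L, x, h=lambda lm: np.exp(-lm), K=30, lam_max=4.0)
```

Only an upper bound on the spectrum is needed (here $\lambda_{\max} \le 4$ for the path Laplacian), so no eigendecomposition is ever formed.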
2.3 Krylov Subspace and Accelerated Filtering
Krylov subspace methods (e.g., Conjugate Gradient, Lanczos, LOBPCG) yield accelerated polynomial filters constructed adaptively for the specific input and graph spectrum. Given a signal $x$, the $K$th Krylov subspace is $\mathcal{K}_K(L, x) = \operatorname{span}\{x, Lx, \dots, L^{K-1}x\}$, and filter outputs are projected onto this subspace. Lanczos-adaptive filters converge rapidly in the presence of Laplacian spectral gaps and offer higher accuracy per matrix-vector product than classical Chebyshev filters, especially on large and sparse graphs (Susnjara et al., 2015, Knyazev et al., 2015).
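A minimal Lanczos sketch for approximating $f(L)x$ from the Krylov subspace: tridiagonalize $L$ against $x$, then evaluate $f$ on the small tridiagonal matrix. Full reorthogonalisation is included for numerical robustness; the ring graph, heat response, and subspace dimension are illustrative assumptions.

```python
import numpy as np

def lanczos_fL_x(matvec, x, f, K):
    """Approximate f(L) x via Lanczos: L V ~ V T, then f(L) x ~ ||x|| V f(T) e_1."""
    n = x.shape[0]
    V = np.zeros((n, K))
    alpha = np.zeros(K)
    beta = np.zeros(K - 1)
    beta0 = np.linalg.norm(x)
    V[:, 0] = x / beta0
    for j in range(K):
        w = matvec(V[:, j])
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalisation
        if j < K - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)                  # small K x K eigenproblem
    e1 = np.zeros(K); e1[0] = 1.0
    return beta0 * (V @ (S @ (f(theta) * (S.T @ e1))))

# Demo: heat-kernel filtering on a ring graph (illustrative assumption).
n = 64
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0
x = np.random.default_rng(3).standard_normal(n)
y = lanczos_fL_x(lambda v: L @ v, x, lambda t: np.exp(-t), K=25)
```

Because the Lanczos basis adapts to both the signal and the spectrum, a modest $K$ often matches the accuracy of a much higher-degree fixed Chebyshev expansion.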
3. Generalizations: Negative Weights, Nonlinear Laplacians, and Robust Designs
3.1 Edge-Enhancing Filters with Negative Weights
Standard Laplacian filters assume $W_{ij} \ge 0$, but selective introduction of negative weights at known edges can enhance contrast by repelling Laplacian eigenmodes at those locations. The result is edge-enhancing rather than edge-smoothing behavior, significantly improving jump preservation in denoising and segmentation tasks with minimal overshoot and higher PSNR/SSIM, provided diagonal dominance is enforced to keep $L \succeq 0$ (Knyazev, 2015).
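The effect can be sketched on a toy step signal. Here diagonal dominance is enforced by building the degrees from $|W|$, which keeps the Laplacian symmetric diagonally dominant and hence positive semi-definite; the graph, the edge weight $-0.4$, and the diffusion time are illustrative assumptions rather than values from the cited paper.

```python
import numpy as np

def heat_filter(L, x, t=0.5):
    """Low-pass heat-kernel filter exp(-t L) x via dense eigendecomposition."""
    lam, U = np.linalg.eigh(L)
    return U @ (np.exp(-t * lam) * (U.T @ x))

# Path graph on 8 nodes; the signal has a jump between nodes 3 and 4.
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
step = np.r_[np.zeros(4), np.ones(4)]

# Standard all-positive Laplacian: smooths across the jump.
L_pos = np.diag(W.sum(axis=1)) - W

# Negative weight placed on the jump edge (3,4). Degrees built from |W|
# keep L_neg diagonally dominant, hence positive semi-definite.
W_neg = W.copy()
W_neg[3, 4] = W_neg[4, 3] = -0.4
L_neg = np.diag(np.abs(W_neg).sum(axis=1)) - W_neg

jump_pos = heat_filter(L_pos, step)[4] - heat_filter(L_pos, step)[3]
jump_neg = heat_filter(L_neg, step)[4] - heat_filter(L_neg, step)[3]
```

With the negative weight, diffusion across the marked edge is suppressed, so the jump survives low-pass filtering while the rest of the signal is smoothed as usual.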
3.2 Nonlinear and p-Laplacian Graph Filters
Extending beyond $p = 2$, the discrete $p$-Laplacian defines nonlinear eigenproblems and polynomial filter families that adaptively localize low- and high-pass effects. The $p$-Laplacian operator supports anisotropic, adaptive filtering regimes, enabling effective message passing on both homophilic and heterophilic graphs and robustifying graph neural networks against topology and label noise (Fu et al., 2021).
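A minimal sketch, assuming the common pointwise form of the discrete $p$-Laplacian, $(\Delta_p x)_i = \sum_j W_{ij}\,|x_i - x_j|^{p-2}(x_i - x_j)$, which reduces to $Lx$ at $p = 2$; the graph and signal are illustrative.

```python
import numpy as np

def p_laplacian(W, x, p):
    """Discrete p-Laplacian: (Delta_p x)_i = sum_j W_ij |x_i - x_j|^{p-2} (x_i - x_j)."""
    diff = x[:, None] - x[None, :]
    mag = np.abs(diff)
    with np.errstate(divide="ignore", invalid="ignore"):
        g = np.where(mag > 0, mag ** (p - 2), 0.0)  # guard 0^{p-2} at equal values
    return (W * g * diff).sum(axis=1)

# Path graph with small within-segment variation and one large jump.
n = 6
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0
x = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])

y2 = p_laplacian(W, x, p=2.0)  # classical linear Laplacian L x
y4 = p_laplacian(W, x, p=4.0)  # p > 2: response concentrates on the jump
```

For $p > 2$ the operator weights large differences superlinearly, so the response concentrates at the discontinuity; for $p < 2$ the opposite happens, which is the source of the adaptive low-/high-pass localization described above.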
3.3 Robust Filtering under Graph Perturbations
Graph Laplacian filters can be rendered robust against random or systematic graph topology perturbations by deriving closed-form perturbation expansions of eigenvalues/eigenvectors and modifying filter spectral masks and polynomial coefficients accordingly (Testa et al., 2024). Joint design minimizes filter deviation and output estimation error under both edge perturbations and noisy inputs, using explicit expectation operations over the perturbed spectrum.
4. Multichannel, Multiscale, and Filter Bank Extensions
4.1 Critical-Sampled and Oversampled Filter Banks
An $M$-channel filter bank splits the Laplacian spectrum into $M$ subbands, applies corresponding bandpass filters $H_m = h_m(L)$, and downsamples on corresponding uniqueness vertex sets. Reconstruction is either exact (small graphs) or approximate via fast polynomial filtering and interpolation (Li et al., 2016). Efficient sampling is achieved via non-uniform sketching (Hutchinson) and Chebyshev polynomial filtering.
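The analysis step can be sketched with ideal spectral masks: the spectrum is partitioned into $M$ subbands and the channel outputs sum back to the input exactly. This is the redundant (undecimated) version; critical sampling would additionally downsample each channel on its uniqueness set, which is not reproduced here. The ring graph and $M = 3$ are illustrative assumptions.

```python
import numpy as np

# Ring-graph Laplacian and its spectrum.
n, M = 32, 3
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0
lam, U = np.linalg.eigh(L)

# Partition [0, lam_max] into M half-open subbands (ideal bandpass masks).
edges = np.linspace(0, lam[-1] + 1e-9, M + 1)
x = np.random.default_rng(4).standard_normal(n)
channels = []
for m in range(M):
    mask = (lam >= edges[m]) & (lam < edges[m + 1])
    channels.append(U @ (mask * (U.T @ x)))   # H_m x = U mask(Lambda) U^T x
recon = np.sum(channels, axis=0)              # masks partition the spectrum
```

Because the ideal masks partition the spectrum, the channel outputs are orthogonal and summing them reconstructs the signal to machine precision.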
For joint time-vertex or higher-dimensional data, oversampled Laplacians and graph-coloring strategies enable the preservation of all temporal and spatial edges in bipartite decompositions, supporting redundant multiresolution representations with provable perfect reconstruction and improved denoising (Zhang et al., 14 Nov 2025).
4.2 Spline, Ideal, and Butterworth Spectral Filters
Two-channel filter banks with analysis and synthesis filters specified in the graph Fourier (Laplacian spectrum) domain allow flexible shaping of subbands using polynomial spline, ideal, or Butterworth functions. Novel spectral domain constructions—such as the SGFBSS—achieve critical sampling, exact PR, and efficient sparse implementations, outperforming earlier vertex-domain or redundant multiscale graph filter architectures (Miraki et al., 2020).
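As a generic illustration of Butterworth-shaped spectral subbands (not the specific SGFBSS construction), a power-complementary two-channel pair satisfies $h_0(\lambda)^2 + h_1(\lambda)^2 = 1$, so analysis followed by matched synthesis is perfect-reconstruction before any downsampling. The ring graph, cutoff, and order are illustrative assumptions.

```python
import numpy as np

def butterworth_pair(lam, lam_c=1.0, N=4):
    """Power-complementary Butterworth low/high pair on the Laplacian spectrum."""
    h0 = 1.0 / np.sqrt(1.0 + (lam / lam_c) ** (2 * N))
    h1 = np.sqrt(1.0 - h0 ** 2)   # complement guarantees h0^2 + h1^2 = 1
    return h0, h1

# Ring-graph Laplacian and spectral analysis/synthesis.
n = 32
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0
lam, U = np.linalg.eigh(L)
h0, h1 = butterworth_pair(lam)

x = np.random.default_rng(5).standard_normal(n)
low = U @ (h0 * (U.T @ x))    # analysis channel 0
high = U @ (h1 * (U.T @ x))   # analysis channel 1
recon = U @ (h0 * (U.T @ low)) + U @ (h1 * (U.T @ high))  # matched synthesis
```

Raising the order $N$ sharpens the transition between the subbands while keeping the pair power-complementary, which is the spectral-domain analogue of the subband shaping described above.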
5. Algorithmic and Implementation Considerations
Computational Complexity
- Spectral methods: $O(n^3)$ (full eigendecomposition), only feasible for small graphs.
- Polynomial methods: $O(K|E|)$ for degree-$K$ Chebyshev or Lanczos filtering, scalable to large, sparse graphs (Susnjara et al., 2015, Knyazev et al., 2015).
- Krylov methods: $O(K|E|)$ for a $K$-step basis, with memory $O(Kn)$; exploit spectral adaptation for faster convergence on clustered spectra (Susnjara et al., 2015).
- Multi-scale and filter banks: roughly $O(MK|E|)$ across $M$ channels when polynomial bases are shared, with additional iterative cost for reconstruction via conjugate gradient interpolation (Li et al., 2016).
- Sparse synthesis: Spectral filter banks can achieve near-linear synthesis cost via block-diagonal/anti-diagonal spectral domain operations (Miraki et al., 2020).
6. Applications, Practical Impact, and Extensions
Graph Laplacian filters underpin state-of-the-art techniques in denoising, compression, semi-supervised learning, segmentation, clustering, and graph neural networks. Practical deployments demonstrate:
- Superior edge preservation and contrast by edge-enhancing negative weight filters (Knyazev, 2015).
- Improved SNR and visual quality in images, videos, and large-scale signals via polynomial and Krylov filters (Knyazev et al., 2015, Susnjara et al., 2015, Miraki et al., 2020).
- Adaptivity to changes in graph structure (e.g., robust communications, dynamic networks) (Testa et al., 2024, Yan et al., 2017).
- Fundamental multiscale and filter-bank architectures for joint or multidimensional graph signals, allowing interpretable decompositions and reconstruction (Li et al., 2016, Zhang et al., 14 Nov 2025).
Ongoing research encompasses automated negative-weight placement, extension to directed graphs, scalable implementations for massive graphs, learning optimal filters from data, and integration into learning-based frameworks for robust, explainable graph representation learning.