
Graph Laplacian Filter

Updated 3 March 2026
  • A graph Laplacian filter is a spectral operator defined by applying a function to the Laplacian matrix, enabling low-pass, band-pass, or general filtering of graph signals.
  • It leverages efficient polynomial and Krylov subspace approximations, such as Chebyshev methods, to scale filtering operations on large and sparse graphs.
  • Recent extensions—including negative weights, multiscale filter banks, and nonlinear designs—enhance edge preservation, robustness, and adaptability in various applications.

A graph Laplacian filter is a graph-based signal processing operator defined by a function of the graph Laplacian, which acts as a low-pass, band-pass, or general spectral filter on signals supported on the nodes of a graph. The graph Laplacian—either in its combinatorial or normalized form—encodes the structural topology of the graph and provides the underlying Fourier-like basis for spectral analysis. Laplacian filters are critical for denoising, feature extraction, compression, and as convolutional layers in graph neural networks, with efficient scalable implementations using polynomial and Krylov subspace methods. Recent developments extend the framework to edge-enhancing filters using negative weights, robust designs under topology perturbations, and multichannel/multiscale filter banks for complex data modalities.

1. Mathematical Foundations of Graph Laplacian Filters

Let $G=(V,E)$ be a weighted, undirected graph with $n=|V|$ vertices, weight matrix $W \in \mathbb{R}^{n \times n}$, and degree matrix $D = \operatorname{diag}(d_1, \dots, d_n)$, $d_i = \sum_j w_{ij}$. Two standard Laplacian matrices are used:

  • Combinatorial Laplacian: $L = D - W$
  • Symmetric normalized Laplacian: $L_\mathrm{sym} = I - D^{-1/2} W D^{-1/2}$

Both are real, symmetric, and positive semi-definite under $w_{ij} \geq 0$, but constructions with negative weights are possible as long as the matrix remains (strictly) diagonally dominant to preserve $L \succeq 0$ (Knyazev, 2015).

The eigendecomposition $L = U \Lambda U^T$ provides the graph Fourier basis $U = [u_1, \dots, u_n]$ and the Laplacian spectrum $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, $0 = \lambda_1 \leq \dots \leq \lambda_n$. Any graph signal $x \in \mathbb{R}^n$ can be expanded as $x = \sum_k \hat{x}_k u_k$ with $\hat{x}_k = u_k^T x$ (the graph Fourier transform).

A graph Laplacian filter is defined as $h(L) = U\, h(\Lambda)\, U^T$, where $h$ is a function applied to the Laplacian eigenvalues. Applying $h(L)$ to $x$ multiplies each spectral component $\hat{x}_k$ by $h(\lambda_k)$.
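As an illustration, this spectral definition can be sketched directly with a dense eigendecomposition (the function name and the 4-cycle example are ours, not from the cited papers):

```python
import numpy as np

def graph_filter(W, x, h):
    """Apply the spectral filter h(L) = U h(Lambda) U^T to a signal x.

    Uses an O(n^3) eigendecomposition, so it is illustrative only;
    polynomial/Krylov methods replace it on large graphs.
    """
    L = np.diag(W.sum(axis=1)) - W       # combinatorial Laplacian L = D - W
    lam, U = np.linalg.eigh(L)           # spectrum and graph Fourier basis
    x_hat = U.T @ x                      # graph Fourier transform
    return U @ (h(lam) * x_hat)          # scale each component, invert

# 4-cycle graph (eigenvalues 0, 2, 2, 4); ideal low-pass keeping lambda < 2
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 1.0, 0.0])       # DC plus the highest-frequency mode
y = graph_filter(W, x, lambda lam: (lam < 2).astype(float))
```

Here the low-pass mask removes the $\lambda = 4$ component, leaving only the constant mean $0.5$ at every node.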

2. Spectral Design and Filter Classes

2.1 Classical and Low-Pass Designs

The bilateral filter is a one-hop spectral filter with response $h(\lambda) = 1 - \lambda$ on the normalized Laplacian spectrum, acting as a low-pass filter (Gadde et al., 2013). More generally, graph Laplacian regularization leads to smoothness functionals $x^T L x = \tfrac{1}{2} \sum_{i,j} w_{ij} (x_i - x_j)^2$, and regularized denoising solutions of the form

$$\hat{x} = \arg\min_x \|y - x\|_2^2 + x^T r(L)\, x = \bigl(I + r(L)\bigr)^{-1} y,$$

where $r(L)$ is a chosen regularization matrix function (often a positive monotonic function of $L$), e.g., $r(L) = \gamma L$, $r(L) = \gamma L^2$, or $r(L) = e^{\tau L} - I$, producing Laplacian, Tikhonov, and diffusion (heat) kernel filters, respectively (Salim et al., 2020, Egilmez et al., 2018).
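A minimal sketch of such a denoising filter, assuming the plain Laplacian regularizer $\gamma\, x^T L x$ (so the filter is $(I + \gamma L)^{-1}$) and a sparse linear solve; the path graph and signal are illustrative choices of ours:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def laplacian_denoise(W, y, gamma=1.0):
    """Solve (I + gamma*L) x = y, the minimizer of ||y - x||^2 + gamma x^T L x.

    A sparse SPD linear solve; no eigendecomposition is formed.
    """
    W = sparse.csr_matrix(W)
    degrees = np.asarray(W.sum(axis=1)).ravel()
    L = sparse.diags(degrees) - W
    n = L.shape[0]
    return spsolve((sparse.eye(n) + gamma * L).tocsc(), y)

# Path graph, noisy step signal: filtering damps the oscillatory noise
W = sparse.diags([np.ones(5)], [1], shape=(6, 6))
W = W + W.T
y = np.array([0., 0., 0., 1., 1., 1.]) + 0.1 * np.array([1, -1, 1, -1, 1, -1])
x = laplacian_denoise(W, y, gamma=2.0)
```

The output lowers the quadratic form $x^T L x$ relative to the input while preserving the signal mean, since the spectral response $h(\lambda) = 1/(1+\gamma\lambda)$ satisfies $h(0) = 1$.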

2.2 Polynomial and FIR Graph Filters

To enable scalable computation, filters are approximated by polynomials: $h(L) \approx \sum_{k=0}^{K} a_k L^k$. This provides a finite impulse response (FIR) graph filter with spectral response $h(\lambda) = \sum_{k=0}^{K} a_k \lambda^k$, where the coefficients $a_k$ determine the passband/stopband characteristics (Kruzick et al., 2018, Knyazev et al., 2015). Chebyshev polynomial approximation provides numerically stable and efficient polynomial expansions, with recurrences that avoid explicit eigendecomposition (Gadde et al., 2013, Knyazev et al., 2015).
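The Chebyshev recurrence can be sketched as follows; the heat-kernel response and the spectral upper bound $\lambda_{\max} \le 2\, d_{\max}$ for $L = D - W$ are standard, but the function names and parameters are our illustrative choices:

```python
import numpy as np

def cheb_filter(L, x, h, K=30, lam_max=None):
    """Approximate y = h(L) x with a degree-K Chebyshev expansion.

    Needs only K matrix-vector products with L -- no eigendecomposition.
    lam_max may be any upper bound on the spectrum; for L = D - W the
    crude bound lambda_max <= 2 * max degree suffices.
    """
    n = L.shape[0]
    if lam_max is None:
        lam_max = 2.0 * np.max(np.diag(L))          # degrees sit on diag(L)
    # Chebyshev coefficients of h on [0, lam_max] via Chebyshev nodes
    j = np.arange(K + 1)
    theta = np.pi * (j + 0.5) / (K + 1)
    f = h(lam_max * (np.cos(theta) + 1.0) / 2.0)
    c = np.array([2.0 / (K + 1) * np.sum(f * np.cos(k * theta))
                  for k in range(K + 1)])
    # Three-term recurrence on the rescaled operator Lt = 2L/lam_max - I
    Lt = (2.0 / lam_max) * L - np.eye(n)
    t_prev, t_cur = x, Lt @ x
    y = 0.5 * c[0] * t_prev + c[1] * t_cur
    for k in range(2, K + 1):
        t_prev, t_cur = t_cur, 2.0 * (Lt @ t_cur) - t_prev
        y = y + c[k] * t_cur
    return y

# Heat-kernel filter h(lam) = exp(-lam) on a small random graph
rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.4).astype(float)
W = np.triu(A, 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
x = rng.standard_normal(8)
y = cheb_filter(L, x, lambda lam: np.exp(-lam), K=40)
```

Because the recurrence touches only `Lt @ v` products, the same code runs unchanged on a sparse `L` for large graphs (apart from the dense `np.diag`/`np.eye` bound, which would be replaced by sparse equivalents).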

2.3 Krylov Subspace and Accelerated Filtering

Krylov subspace methods (e.g., Conjugate Gradient, Lanczos, LOBPCG) yield accelerated polynomial filters constructed adaptively for the specific input and graph spectrum. Given a signal $x$, the $m$th Krylov subspace is $\mathcal{K}_m(L, x) = \operatorname{span}\{x, Lx, \dots, L^{m-1}x\}$, and filter outputs are projected onto this subspace. Lanczos-adaptive filters converge rapidly in the presence of Laplacian spectral gaps and offer higher accuracy per matrix-vector product than classical Chebyshev filters, especially on large and sparse graphs (Susnjara et al., 2015, Knyazev et al., 2015).
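A hedged sketch of Lanczos-based filtering with full reorthogonalization (simplified relative to the cited adaptive methods; names and parameters are ours):

```python
import numpy as np

def lanczos_filter(L, x, h, m=8):
    """Approximate h(L) x by projection onto the Krylov subspace K_m(L, x).

    Lanczos builds an orthonormal basis Q of K_m and the tridiagonal
    projection T = Q^T L Q; then h(L) x ~= ||x|| * Q h(T) e_1, with h
    evaluated exactly on the small m-by-m matrix T.
    """
    nrm = np.linalg.norm(x)
    Q = np.zeros((L.shape[0], m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    Q[:, 0] = x / nrm
    k_eff = m
    for k in range(m):
        w = L @ Q[:, k]
        alpha[k] = Q[:, k] @ w
        w = w - Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)   # full reorthogonalization
        if k < m - 1:
            beta[k] = np.linalg.norm(w)
            if beta[k] < 1e-12:          # invariant subspace reached: exact
                k_eff = k + 1
                break
            Q[:, k + 1] = w / beta[k]
    Q, alpha, beta = Q[:, :k_eff], alpha[:k_eff], beta[:k_eff - 1]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    lam, S = np.linalg.eigh(T)
    return nrm * (Q @ (S @ (h(lam) * S[0, :])))       # = ||x|| Q h(T) e_1

# With m equal to the graph size, the projection reproduces h(L) x
rng = np.random.default_rng(0)
W = np.triu(rng.random((8, 8)) * (rng.random((8, 8)) < 0.5), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
x = rng.standard_normal(8)
y = lanczos_filter(L, x, lambda lam: np.exp(-lam), m=8)
```

In practice $m \ll n$ suffices: accuracy improves with each additional matrix-vector product, and the small eigenproblem on $T$ is negligible in cost.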

3. Generalizations: Negative Weights, Nonlinear Laplacians, and Robust Designs

3.1 Edge-Enhancing Filters with Negative Weights

Standard Laplacian filters assume $w_{ij} \geq 0$, but selective introduction of negative weights at known edges can enhance contrasts by repelling Laplacian eigenmodes at those locations. The result is edge-enhancing rather than edge-smoothing behavior, significantly improving jump preservation in denoising and segmentation tasks with minimal overshoot and higher PSNR/SSIM, provided diagonal dominance is enforced to keep $L \succeq 0$ (Knyazev, 2015).
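One way to see the effect, as a sketch under our own illustrative choices (a path graph densified with 2-hop edges, and one weight set to $-0.2$, small enough that $L \succeq 0$ can be verified numerically):

```python
import numpy as np

def signed_laplacian(edges, n):
    """Build L = D - W from a signed edge list, with d_i = sum_j w_ij."""
    W = np.zeros((n, n))
    for i, j, w in edges:
        W[i, j] = W[j, i] = w
    return np.diag(W.sum(axis=1)) - W

def heat(L, x, t=0.2):
    """Low-pass (heat kernel) filter exp(-t L) x via eigendecomposition."""
    lam, U = np.linalg.eigh(L)
    return U @ (np.exp(-t * lam) * (U.T @ x))

n = 6
edges = [(i, i + 1, 1.0) for i in range(n - 1)]       # path edges, positive
edges += [(i, i + 2, 1.0) for i in range(n - 2)]      # 2-hop edges, positive
L_pos = signed_laplacian(edges, n)
edges[2] = (2, 3, -0.2)          # small negative weight at the known jump
L_neg = signed_laplacian(edges, n)

# Low-pass filtering of a step: the negative weight repels smoothing
# across edge (2, 3), so the jump survives the filter better.
step = np.array([0., 0., 0., 1., 1., 1.])
y_pos, y_neg = heat(L_pos, step), heat(L_neg, step)
```

The surrounding positive 2-hop edges are what keep the quadratic form nonnegative despite the negative weight; on a bare path, any negative edge breaks $L \succeq 0$.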

3.2 Nonlinear and p-Laplacian Graph Filters

Extending beyond the quadratic case $p = 2$, the discrete $p$-Laplacian defines nonlinear eigenproblems and polynomial filter families that adaptively localize low- and high-pass effects. The $p$-Laplacian operator supports anisotropic, adaptive filtering regimes, enabling effective message passing on both homophilic and heterophilic graphs and robustifying graph neural networks against topology and label noise (Fu et al., 2021).
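The $p$-Laplacian dynamics can be sketched with a lagged-diffusivity iteration (one common discretization; the scheme, parameters, and example are our illustrative choices, not the construction of Fu et al.):

```python
import numpy as np

def p_laplacian_smooth(W, x, p=1.5, tau=1.0, iters=3, eps=1e-3):
    """Lagged-diffusivity sketch of p-Laplacian smoothing.

    Each pass re-weights edge (i, j) by |x_i - x_j|^(p - 2) and takes one
    implicit diffusion step with the re-weighted Laplacian; p < 2 smooths
    small (noise-like) differences aggressively while crossing large
    jumps slowly, i.e. edge-preserving, anisotropic filtering.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for _ in range(iters):
        d = np.abs(x[:, None] - x[None, :])
        Wp = W * np.maximum(d, eps) ** (p - 2)        # adaptive edge weights
        L = np.diag(Wp.sum(axis=1)) - Wp
        x = np.linalg.solve(np.eye(n) + tau * L, x)   # implicit, stable step
    return x

# Noisy step on a 10-node path: noise is flattened, the jump survives
n = 10
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
rng = np.random.default_rng(1)
noisy = np.where(np.arange(n) < 5, 0.0, 1.0) + 0.05 * rng.standard_normal(n)
smooth = p_laplacian_smooth(W, noisy, p=1.2)
```

Setting `p=2` makes the edge weights constant and recovers the linear Laplacian filter of Section 2.1.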

3.3 Robust Filtering under Graph Perturbations

Graph Laplacian filters can be rendered robust against random or systematic graph topology perturbations by deriving closed-form perturbation expansions of eigenvalues/eigenvectors and modifying filter spectral masks and polynomial coefficients accordingly (Testa et al., 2024). Joint design minimizes filter deviation and output estimation error under both edge perturbations and noisy inputs, using explicit expectation operations over the perturbed spectrum.
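The expectation over perturbed topologies can be made concrete by simple Monte-Carlo sampling (the cited designs use closed-form perturbation expansions instead; this sketch, including the independent edge-drop model, is our own):

```python
import numpy as np

def expected_filter_output(W, x, h, drop_p=0.1, trials=200, seed=0):
    """Monte-Carlo estimate of E[h(L') x] under random edge deletions.

    Each edge survives independently with probability 1 - drop_p; the
    closed-form perturbation expansions of the cited designs replace
    this sampling, which is shown only to make the expectation concrete.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    iu = np.triu_indices(n, 1)
    acc = np.zeros(n)
    for _ in range(trials):
        keep = rng.random(len(iu[0])) > drop_p
        Wp = np.zeros((n, n))
        Wp[iu] = W[iu] * keep
        Wp = Wp + Wp.T
        L = np.diag(Wp.sum(axis=1)) - Wp
        lam, U = np.linalg.eigh(L)
        acc += U @ (h(lam) * (U.T @ x))
    return acc / trials

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 2.0, 0.0])
h = lambda lam: 1.0 / (1.0 + lam)        # Tikhonov-style low-pass response
y_nominal = expected_filter_output(W, x, h, drop_p=0.0, trials=1)
y_expected = expected_filter_output(W, x, h, drop_p=0.2, trials=500)
```

Comparing `y_expected` with `y_nominal` quantifies how far the nominal filter drifts under topology noise, which is the deviation the robust designs minimize.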

4. Multichannel, Multiscale, and Filter Bank Extensions

4.1 Critical-Sampled and Oversampled Filter Banks

An $M$-channel filter bank splits the Laplacian spectrum into $M$ subbands, applies corresponding bandpass filters $h_m(L)$, and downsamples on corresponding uniqueness vertex sets. Reconstruction is either exact (small graphs) or approximate via fast polynomial filtering and interpolation (Li et al., 2016). Efficient sampling is achieved via non-uniform sketching (Hutchinson) and Chebyshev polynomial filtering.

For joint time-vertex or higher-dimensional data, oversampled Laplacians and graph-coloring strategies enable the preservation of all temporal and spatial edges in bipartite decompositions, supporting redundant multiresolution representations with provable perfect reconstruction and improved denoising (Zhang et al., 14 Nov 2025).

4.2 Spline, Ideal, and Butterworth Spectral Filters

Two-channel filter banks with analysis and synthesis filters specified in the graph Fourier (Laplacian spectrum) domain allow flexible shaping of subbands using polynomial spline, ideal, or Butterworth functions. Novel spectral domain constructions—such as the SGFBSS—achieve critical sampling, exact PR, and efficient sparse implementations, outperforming earlier vertex-domain or redundant multiscale graph filter architectures (Miraki et al., 2020).
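A toy sketch of a two-channel spectral split with ideal (brick-wall) masks; it shows complementary subband analysis only, not the critical sampling or sparse synthesis of SGFBSS (names and the median cut are our choices):

```python
import numpy as np

def two_channel_split(W, x):
    """Ideal two-channel analysis with brick-wall spectral masks.

    The low channel keeps eigenvalues up to the median, the high channel
    keeps the rest; the masks sum to the identity, so adding the two
    subband signals reconstructs x exactly (no downsampling shown).
    """
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)
    low = lam <= np.median(lam)
    x_hat = U.T @ x
    y_low = U @ np.where(low, x_hat, 0.0)
    y_high = U @ np.where(low, 0.0, x_hat)
    return y_low, y_high

rng = np.random.default_rng(2)
W = np.triu((rng.random((10, 10)) < 0.3).astype(float), 1)
W = W + W.T
x = rng.standard_normal(10)
y_low, y_high = two_channel_split(W, x)
```

Practical banks replace the brick-wall masks with spline or Butterworth responses realized by polynomial approximation, and add downsampling on vertex subsets with a matched synthesis stage.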

5. Algorithmic and Implementation Considerations

Computational Complexity

  • Spectral methods: $O(n^3)$ (full eigendecomposition)—only feasible for small graphs.
  • Polynomial methods: $O(K|E|)$ for degree-$K$ Chebyshev or Lanczos filtering, scalable to large, sparse graphs (Susnjara et al., 2015, Knyazev et al., 2015).
  • Krylov methods: $O(m|E|)$ for an $m$-step basis, with memory $O(mn)$; exploit spectral adaptation for faster convergence on clustered spectra (Susnjara et al., 2015).
  • Multi-scale and filter banks: cost comparable to a single polynomial filter per channel if sharing polynomial bases, with reconstruction via conjugate gradient interpolation at $O(|E|)$ per iteration (Li et al., 2016).
  • Sparse synthesis: spectral filter banks can achieve low synthesis cost via block-diagonal/anti-diagonal spectral domain operations (Miraki et al., 2020).

6. Applications, Practical Impact, and Extensions

Graph Laplacian filters underpin state-of-the-art techniques in denoising, compression, semi-supervised learning, segmentation, clustering, and graph neural networks.

Ongoing research encompasses automated negative-weight placement, extension to directed graphs, scalable implementations for massive graphs, learning optimal filters from data, and integration into learning-based frameworks for robust, explainable graph representation learning.
