
FDIFF-PINN: Fractional Differential Equation PINNs

Updated 20 December 2025
  • FDIFF-PINN is a framework that integrates deep neural networks with specialized quadrature and discretization techniques to robustly solve fractional differential equations.
  • It combines automatic differentiation, spectral methods, and Monte Carlo sampling to handle Caputo, Riemann–Liouville, and other fractional operators in complex PDE problems.
  • Applications include battery state estimation, anomalous diffusion, and high-dimensional Fokker–Planck equations, demonstrating noise resilience and precise parameter recovery.

A Fractional Differential Equation Physics-Informed Neural Network (FDIFF-PINN) is a class of scientific machine learning frameworks that leverages deep neural network function approximators to solve direct, inverse, and parametric problems governed by fractional differential equations (FDEs). Distinct from integer-order PINNs, FDIFF-PINNs are constructed for equations involving Caputo, Riemann–Liouville, Grünwald–Letnikov, conformable, or Riesz derivatives (PDEs modeling memory, nonlocality, anomalous diffusion, or heavy-tailed processes) by hybridizing neural architectures with specialized quadrature or discretization for the fractional operators. FDIFF-PINNs unify advances in automatic differentiation, fractional calculus, spectral and weak formulations, and stochastic representations to yield PDE-constrained learning algorithms applicable to problems ranging from deterministic subdiffusion to high-dimensional fractional Fokker–Planck–Lévy equations, parametric battery modeling, and random-field SFPDEs.

1. Mathematical Formulation and Governing Equations

Fractional PDEs generally take the form

$$\mathcal D_t^\beta u(x,t) + \mathcal L_x^{\alpha} u(x,t) + \mathcal M[u;\theta_p] = f(x,t),\quad (x,t)\in \Omega\times(0,T],$$

where $\mathcal D_t^\beta$ is a time-fractional derivative (Caputo, Riemann–Liouville, conformable, G-L, etc.) of order $0<\beta<1$, $\mathcal L_x^{\alpha}$ is a nonlocal spatial operator (Riesz/Caputo/tempered Laplacian) of order $0<\alpha<2$, $\mathcal M$ encodes lower-order or nonlinear terms, and $f(x,t)$ is the source (potentially black-box or random). Boundary and initial conditions are problem-specific; for Dirichlet/zero-flux conditions, hard/soft constraints or network reparameterization are used to satisfy them (Pang et al., 2018, Ma et al., 2023, Hu et al., 17 Jun 2024, Sivalingam et al., 28 Mar 2025).

Specific examples include:

  • Time-fractional diffusion (Caputo form):

$$\frac{1}{\Gamma(1-\alpha)}\int_0^t (t-s)^{-\alpha}\,\partial_s u(x,s)\,ds = \nabla\cdot\big(D(u)\,\nabla u(x,t)\big) + f(x,t)$$
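
For reference, this Caputo derivative is commonly discretized with the L1 scheme (referenced in Section 2 below); on a uniform grid $t_n = n\,\Delta t$ it takes the standard form

$$\partial_t^\alpha u(t_n) \approx \frac{(\Delta t)^{-\alpha}}{\Gamma(2-\alpha)} \sum_{k=0}^{n-1} b_k^{(\alpha)}\,\big[u(t_{n-k}) - u(t_{n-k-1})\big],\qquad b_k^{(\alpha)} = (k+1)^{1-\alpha} - k^{1-\alpha},$$

valid for $0<\alpha<1$.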

2. Discretization and Numerical Evaluation of Fractional Operators

Standard PINN automatic differentiation cannot natively handle fractional derivatives due to their functional nonlocality. FDIFF-PINNs circumvent this as follows:

  • Caputo/Grünwald–Letnikov (G-L) Time Derivatives:

$$D_t^\alpha u(t_n) \approx (\Delta t)^{-\alpha} \sum_{m=0}^{n} w_m^{(\alpha)}\, u(t_{n-m}),\qquad w_m^{(\alpha)} = (-1)^m\,\frac{\Gamma(\alpha+1)}{\Gamma(m+1)\,\Gamma(\alpha-m+1)}$$

These weights are embedded either in the computation graph (Ghaderi et al., 29 Oct 2025, Dang et al., 13 Dec 2025) or applied via finite-difference quadrature (L1/L2 schemes) (Thakur et al., 6 Jun 2024, Pang et al., 2018); the first sketch after this list gives a minimal implementation.

  • Space-Fractional Laplacian (Riesz/Tempered):
    • Directional Grünwald–Letnikov: Discretizes via dense convolutional stencils along angular directions (auxiliary grid) (Pang et al., 2018).
    • Monte Carlo (MC-fPINN/MC-tfPINN): High-dimensional spatial integrals are split into a 1D radial part (handled via Gauss–Jacobi/Laguerre quadrature) and $(d-1)$-dimensional spherical MC sampling, enabling up to $10^5$ dimensions (Hu et al., 17 Jun 2024).
    • Score-based fractionalization: Integration-by-parts formulations introduce a learnable "fractional score function" to locally reparameterize the nonlocal operator, then solve an equivalent second-order PDE (Hu et al., 17 Jun 2024).
  • Conformable derivatives: These admit a chain-rule-friendly form $T_\alpha[f](t) = t^{1-\alpha} f'(t)$, fully compatible with AD (Ye et al., 2021); see the second sketch after this list.
  • Spectral methods: Global Legendre (or Chebyshev) expansions in space (and in time for multi-term FDEs), with NN learning modal coefficients, yielding mesh-free, fast convergence for smooth solutions (Sivalingam et al., 28 Mar 2025).
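
The following minimal NumPy sketch illustrates the G-L quadrature above (illustrative code, not drawn from the cited papers; gl_weights and gl_derivative are hypothetical names). It evaluates the weights by the stable recurrence $w_0^{(\alpha)} = 1$, $w_m^{(\alpha)} = w_{m-1}^{(\alpha)}\,(m-1-\alpha)/m$, which is algebraically equivalent to the Gamma-function form:

import numpy as np

def gl_weights(alpha, n):
    # w_m = (-1)^m * binom(alpha, m), via w_0 = 1, w_m = w_{m-1} * (m - 1 - alpha) / m
    w = np.empty(n + 1)
    w[0] = 1.0
    for m in range(1, n + 1):
        w[m] = w[m - 1] * (m - 1 - alpha) / m
    return w

def gl_derivative(u, dt, alpha):
    # D_t^alpha u(t_n) ~ dt**(-alpha) * sum_{m=0}^{n} w_m * u(t_{n-m}), for every n
    w = gl_weights(alpha, len(u) - 1)
    return dt ** (-alpha) * np.array(
        [np.dot(w[: n + 1], u[n::-1]) for n in range(len(u))])

The conformable case needs no quadrature at all: since $T_\alpha[f](t) = t^{1-\alpha} f'(t)$, one ordinary AD call suffices. A PyTorch sketch under the same caveats:

import torch

def conformable_derivative(f, t, alpha):
    # T_alpha[f](t) = t**(1 - alpha) * f'(t), with f'(t) from autograd
    t = t.detach().requires_grad_(True)
    (df,) = torch.autograd.grad(f(t).sum(), t)
    return t.pow(1.0 - alpha) * df

# Sanity check against the closed form for f(t) = t^2, i.e. 2 * t^(2 - alpha)
t = torch.linspace(0.1, 1.0, 5)
print(torch.allclose(conformable_derivative(lambda s: s ** 2, t, 0.5),
                     2.0 * t ** 1.5, atol=1e-4))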

3. Neural Network Architectures and Loss Construction

Most FDIFF-PINN approaches leverage multi-layer perceptrons (MLPs) or recurrent networks (for sequential data) as universal surrogates for $u(x,t)$, auxiliary coefficients (e.g., $D(u)$, $a(x)$), or bi-orthogonal stochastic modes. Variants include:

  • Hard Constraint Networks: Multiply the base NN by spatial masks (e.g., $\rho(x) = \max(0,\,1-\|x\|^2)$) and/or by $t$ to enforce $u|_{\partial\Omega}=0$ and $u(x,0)=u_0(x)$ by construction (Hu et al., 17 Jun 2024, Pang et al., 2018); a minimal sketch appears after this list.
  • Multi-output Architectures: Separate networks for solution, coefficients, fractional order, and uncertainty modes (Ma et al., 2023, Thakur et al., 6 Jun 2024).
  • Spectral Coefficient Nets: DNN learns time and parameter dependence of spectral expansion coefficients (Sivalingam et al., 28 Mar 2025).
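
A minimal sketch of the hard-constraint construction, assuming the domain is the unit ball and the initial condition $u_0$ vanishes on its boundary (names are illustrative, not taken from the cited papers):

import torch

def hard_constrained_u(net, u0, x, t):
    # u(x,t) = u0(x) + t * rho(x) * NN(x,t): at t = 0 this returns u0(x) exactly,
    # and rho vanishes on the unit sphere, so the boundary value is pinned to u0
    rho = torch.clamp(1.0 - (x ** 2).sum(dim=-1, keepdim=True), min=0.0)
    return u0(x) + t * rho * net(torch.cat([x, t], dim=-1))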

The loss function generically aggregates:

$$\mathcal L_\text{total} = \lambda_\text{resid}\,\mathcal L_\text{PDE} + \lambda_\text{data}\,\mathcal L_\text{data} + \lambda_\text{IC/BC}\,\mathcal L_\text{IC/BC} + \lambda_\text{prior}\,\mathcal L_\text{prior}$$

  • $\mathcal L_\text{PDE}$: Mean-squared residuals of the FDE at collocation points, with the fractional discretization included (Hu et al., 17 Jun 2024, Pang et al., 2018).
  • $\mathcal L_\text{data}$: MSE on available observational data (state of charge in batteries, concentration, etc.) (Dang et al., 13 Dec 2025, Thakur et al., 6 Jun 2024).
  • $\mathcal L_\text{IC/BC}$: Strong or weak imposition of initial/boundary data.
  • $\mathcal L_\text{prior}$: Parameter or coefficient priors for parameter identification or regularization (Yan et al., 2023).

Inverse problems treat physical parameters (fractional order $\alpha$, conductivity $k$, coefficient fields) as trainable variables, backpropagating through all loss terms (Ghaderi et al., 29 Oct 2025, Dang et al., 13 Dec 2025, Thakur et al., 6 Jun 2024, Yan et al., 2023); a sketch of this pattern follows.
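
A hedged PyTorch sketch of the trainable-parameter pattern (fde_residual is a hypothetical placeholder for the discretized physics residual, not an API from the cited works):

import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
alpha_raw = torch.nn.Parameter(torch.zeros(()))  # unconstrained scalar
opt = torch.optim.Adam(list(net.parameters()) + [alpha_raw], lr=1e-3)

def train_step(x_col, x_obs, u_obs, lam_data=1.0):
    alpha = torch.sigmoid(alpha_raw)  # constrain to 0 < alpha < 1
    loss = (fde_residual(net, x_col, alpha).pow(2).mean()
            + lam_data * (net(x_obs) - u_obs).pow(2).mean())
    opt.zero_grad()
    loss.backward()   # gradients flow to alpha_raw through the residual
    opt.step()
    return float(loss)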

4. Algorithmic Workflow and Training Strategies

A generic workflow is as follows:

  1. Sampling: Uniform/random selection of collocation points for the PDE (space–time), and additional data or sensor points for inverse tasks.
  2. Forward Pass: For each batch, evaluate NN predictions at required points, compute all AD and discretized derivatives.
  3. Residual and Loss Evaluation: Compute the physics-informed residuals, data terms, and any regularization according to the selected loss function.
  4. Backward Pass: Gradients w.r.t. all NN weights and trainable PDE parameters (fractional order, diffusion, etc.) computed by AD.
  5. Optimizer Steps: Usually Adam for initial epochs, transitioning to second-order (L-BFGS-B) for fine-tuning. Learning rates and schedules are often problem-dependent (Ghaderi et al., 29 Oct 2025, Pang et al., 2018, Ma et al., 2023).
  6. Postprocessing: For Laplace-based or score-based FDIFF-PINNs, post-training inversion or quasi-static projection may be needed: Laplace inversion via Stehfest (sketched after this list), spectral coefficient evaluation (Sivalingam et al., 28 Mar 2025), or score-function root-finding (Hu et al., 17 Jun 2024).
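
For the Laplace route, a minimal sketch of Gaver–Stehfest numerical inversion (a standard algorithm; this particular implementation is illustrative, not the code of (Yan et al., 2023)):

import math

def stehfest_weights(N):
    # Gaver–Stehfest weights V_k for even N
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (N // 2 + k) * s)
    return V

def stehfest_invert(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)
    ln2 = math.log(2.0)
    return ln2 / t * sum(V * F(k * ln2 / t)
                         for k, V in enumerate(stehfest_weights(N), start=1))

# Sanity check: F(s) = 1/(s + 1) is the Laplace transform of exp(-t)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0), math.exp(-1.0))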

A condensed pseudocode for high-dimensional MC/quadrature-based FDIFF-PINNs appears below (adapted from (Hu et al., 17 Jun 2024)):

for epoch in range(N_epochs):
    # Sample collocation, initial/boundary, (and data) points
    x_col, x_icbc = sample_collocation(), sample_icbc()   # problem-specific samplers
    # Evaluate u_pred; compute time/space-fractional terms via quadrature and/or MC
    res_pde = fractional_residual(net, x_col)
    res_icbc = net(x_icbc) - g_icbc(x_icbc)               # prescribed IC/BC data
    # Form the total physics-informed loss L_total
    loss = res_pde.pow(2).mean() + res_icbc.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()        # backpropagate via AD
    optimizer.step()       # update θ and trainable physical parameters

5. Representative Applications and Numerical Benchmarks

FDIFF-PINNs have been rigorously validated across benchmark and real-world problems:

Benchmark Summaries

Paper | Problem Class | Dimension(s) | Operator Type | Method/Discretization | Rel. Error(s)
(Pang et al., 2018) | Forward/inverse ADE | 1D/2D/3D | space/time | G-L, L1 | $10^{-3}$–$10^{-4}$
(Hu et al., 17 Jun 2024) | High-dimensional Poisson/diffusion | up to $d=10^5$ | Riesz/tempered | MC, Gauss–Jacobi | $10^{-3}$–$10^{-2}$
(Sivalingam et al., 28 Mar 2025) | Parametric FDEs | 1D | time | Legendre–Galerkin + DNN | $10^{-3}$–$10^{-4}$
(Dang et al., 13 Dec 2025) | Lithium-ion battery state estimation | 1D (time) | G-L time | G-L, MLP/RNN/LSTM | MSE reduced 30–80% vs. baseline
(Thakur et al., 6 Jun 2024) | Inverse anomalous diffusion/viscoelasticity | 2D/1D | Caputo time | L1 scheme, Swish NN | rel. err. < 10% under 25% noise

Further, specialized strategies (Laplace-fPINN, score-PINN, spectral coefficient learning, bi-orthogonal expansions) are documented for fractional heat conduction (Ghaderi et al., 29 Oct 2025), subdiffusion (Yan et al., 2023), high-dimensional Fokker–Planck–Lévy (Hu et al., 17 Jun 2024), and stochastic SFPDEs (Ma et al., 2023).

Notable Outcomes

  • Generalization and Physical Consistency: FDIFF-PINNs with physics-informed loss yield models robust to severe measurement noise, generalize across parametric ranges, and recover parameters/fractional orders with precision approaching 0.1% (Ghaderi et al., 29 Oct 2025, Thakur et al., 6 Jun 2024, Dang et al., 13 Dec 2025).
  • Curse of Dimensionality: MC-fPINN and score-fPINN frameworks mitigate the exponential scaling with dimension, enabling tractable training and inference for $d=10^3$–$10^5$ (Hu et al., 17 Jun 2024).
  • Stochastic Fractional PDEs: BO-fPINN effectively manages mode-crossing and uncertainty quantification in SFPDEs, outperforming plain bi-orthogonal and gPC methods in high dimensions (Ma et al., 2023).

6. Theoretical Guarantees, Limitations, and Extensions

  • Convergence: Whenever spectral discretizations are used, convergence rates are theoretically quantified, decomposing error into spectral truncation, DNN approximation, and generalization error terms, all vanishing in appropriate limits (Sivalingam et al., 28 Mar 2025).
  • Limitations: FDIFF-PINNs require careful balancing of hybrid loss weights and of memory lengths in G-L discretizations, and may exhibit reduced stability in ill-posed or highly dynamic regimes (e.g., complex battery loading in (Dang et al., 13 Dec 2025)). Spatial operators relying on finite differences are not fully mesh-free and may underperform for non-smooth solutions (Ma et al., 2023).
  • Computational burden: Nonlocal memory in time (Caputo/G-L) and space (Riesz, tempered) can become a bottleneck for large histories or high accuracy, but MC/quadrature hybridization yields practical algorithms (Hu et al., 17 Jun 2024).
  • Future directions: Promising avenues include domain decomposition (XPINN), operator learning (DeepONet, Fourier neural operators), adaptive loss balancing, transfer learning for fractional orders, probabilistic (Bayesian) extensions, and efficient implementation of meshless fractional operators (Hu et al., 17 Jun 2024, Ma et al., 2023, Dang et al., 13 Dec 2025).

7. Variants, Modalities, and Methodological Synthesis

FDIFF-PINNs are not a monolithic architecture but a broad family unified by physics-based loss and explicit numerical accommodation for fractional operators. Variants include:

  • Laplace-fPINN: Transforms Caputo-type time-fractional PDEs into elliptic problems in the $(x,s)$ Laplace domain, enabling use of classic PINNs and avoiding time-history convolutions (Yan et al., 2023).
  • Score-fPINN: Transforms fractional spatial operators to local PDEs for the log-likelihood by introducing a fractional score (via integration by parts), facilitating mesh-free high-dimensional solution (Hu et al., 17 Jun 2024).
  • Spectral PINN: Employs trial solutions as NN-generated linear combinations of global basis functions (e.g., Legendre), eliminating spatial automatic differentiation and delivering provably optimal spectral convergence in parametric settings (Sivalingam et al., 28 Mar 2025); see the sketch after this list.
  • Bi-orthogonal fPINN: Decomposes stochastic FDE solutions into modal expansions, with all bi-orthogonal constraints imposed weakly via the loss, allowing for both forward/inverse stochastic solution and transfer learning (Ma et al., 2023).
  • Conformable PINN: For equations with conformable derivatives, exploits the AD-friendly structure of the operator to preserve full meshless convenience (Ye et al., 2021).
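
To make the spectral-coefficient idea concrete, a minimal NumPy sketch (illustrative only, not the implementation of (Sivalingam et al., 28 Mar 2025)) of a Legendre trial solution whose spatial derivative comes from the basis rather than AD:

import numpy as np
from numpy.polynomial import legendre

def trial_solution(coeffs, x):
    # u(x) = sum_i c_i * P_i(x); in a spectral PINN the coefficients c_i
    # (possibly time- or parameter-dependent) are produced by the network
    return legendre.legval(x, coeffs)

def trial_solution_dx(coeffs, x):
    # du/dx, obtained by differentiating the Legendre expansion exactly
    return legendre.legval(x, legendre.legder(coeffs))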

Each branch can be adapted for inverse parameter identification, data assimilation, discovery of anomalous exponents, or uncertainty quantification, as dictated by problem requirements (Ghaderi et al., 29 Oct 2025, Thakur et al., 6 Jun 2024).


References:

  • (Pang et al., 2018) fPINNs: Fractional Physics-Informed Neural Networks
  • (Ye et al., 2021) Deep neural network methods for solving forward and inverse problems of time fractional diffusion equations with conformable derivative
  • (Ma et al., 2023) Bi-orthogonal fPINN: A physics-informed neural network method for solving time-dependent stochastic fractional PDEs
  • (Yan et al., 2023) Laplace-fPINNs: Laplace-based fractional physics-informed neural networks for solving forward and inverse problems of subdiffusion
  • (Hu et al., 17 Jun 2024) Score-fPINN: Fractional Score-Based Physics-Informed Neural Networks for High-Dimensional Fokker-Planck-Levy Equations
  • (Hu et al., 17 Jun 2024) Tackling the Curse of Dimensionality in Fractional and Tempered Fractional PDEs with Physics-Informed Neural Networks
  • (Thakur et al., 6 Jun 2024) Physics-Informed Neural Network based inverse framework for time-fractional differential equations for rheology
  • (Sivalingam et al., 28 Mar 2025) Spectral coefficient learning physics informed neural network for time-dependent fractional parametric differential problems
  • (Ghaderi et al., 29 Oct 2025) Equation Discovery, Parametric Simulation, and Optimization Using the Physics-Informed Neural Network (PINN) Method for the Heat Conduction Problem
  • (Dang et al., 13 Dec 2025) Fractional Differential Equation Physics-Informed Neural Network and Its Application in Battery State Estimation
