Fractional PINNs for FPDEs
- Fractional PINNs are neural solvers for FPDEs that integrate fractional derivatives with mesh-free deep learning to address nonlocal behavior.
- They employ discrete and Monte Carlo methods to approximate Caputo, Riemann–Liouville, and Riesz derivatives, ensuring accurate simulation on irregular domains.
- fPINNs enable efficient forward and inverse modeling in diverse fields such as physics, engineering, biotransport, and financial mathematics.
Fractional Physics-Informed Neural Networks (fPINNs) are a class of mesh-free, loss-driven deep learning solvers for fractional partial differential equations (FPDEs), extending classical PINNs to accommodate nonlocal operators such as the Caputo, Riemann–Liouville, or Riesz derivatives. They are designed to solve both forward and inverse FPDEs, including time- and space-fractional models on regular or irregular domains, with applications across physics, engineering, biotransport, rheology, ecology, and financial mathematics. Central to fPINNs is the systematic incorporation of fractional derivatives, which require discretization or stochastic approximation, together with neural networks enforcing the physical laws and auxiliary conditions.
1. Mathematical Formulation of fPINNs
Classical fPINNs consider FPDEs of the form
$$ {}^{C}_{0}D_t^{\alpha}\, u(x,t) = \mathcal{L}_x\, u(x,t) + f(x,t), \qquad x \in \Omega,\ t \in (0,T], $$
where ${}^{C}_{0}D_t^{\alpha}$ is the Caputo derivative of order $\alpha \in (0,1)$ and $\mathcal{L}_x$ contains spatial operators, e.g., the Riesz fractional Laplacian of order $\beta \in (1,2)$, assembled from left/right Riemann–Liouville derivatives. Boundary and initial conditions complete the system. fPINNs typically embed the solution as a fully connected feedforward network $u_{\theta}(x,t)$, with the loss function constructed as a weighted sum of mean-square residuals of the governing FPDE and the constraint equations (Pang et al., 2018, Wang et al., 2024).
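For reference, one common convention for these nonlocal operators (on an interval $(a,b)$, with $0<\alpha<1$ in time and $1<\beta<2$ in space) is

$$
{}^{C}_{0}D_t^{\alpha} u(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, \partial_s u(s)\, \mathrm{d}s,
$$
$$
\frac{\partial^{\beta} u}{\partial |x|^{\beta}} \;=\; -\frac{1}{2\cos(\pi\beta/2)}\Big({}_{a}D_x^{\beta} u + {}_{x}D_b^{\beta} u\Big),
\qquad
{}_{a}D_x^{\beta} u(x) \;=\; \frac{1}{\Gamma(2-\beta)}\,\frac{\mathrm{d}^2}{\mathrm{d}x^2} \int_a^x \frac{u(\xi)}{(x-\xi)^{\beta-1}}\, \mathrm{d}\xi,
$$

with ${}_{x}D_b^{\beta}$ the right-sided Riemann–Liouville analogue; sign and normalization conventions vary across the cited works.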
2. Numerical Approximation of Fractional Derivatives
Since automatic differentiation is inapplicable for nonlocal fractional operators, fPINNs employ discrete or sampling-based strategies:
- Finite difference (L1, Grünwald–Letnikov, shifted GL): Caputo derivatives are approximated by weighted sums over the solution history, leading to history convolutions at every time step, with scheme-dependent truncation error on uniform meshes (e.g., $O(\Delta t^{2-\alpha})$ for the L1 scheme) (Shekarpaz et al., 2023, Pang et al., 2018). Directional discretizations are used for space-fractional Laplacians. A minimal sketch of this scheme appears after this list.
- Operational Matrix Methods: Assemble a lower-triangular matrix encoding all fractional difference weights at chosen grid points; fractional derivatives reduce to matrix–vector multiplications, significantly accelerating training (Taheri et al., 2024).
- Monte Carlo (MC-fPINN): Fractional integrals/Laplacians are recast as expectations with carefully chosen probability distributions (e.g., Beta laws for the Caputo time derivative) and angular/radial decompositions (e.g., sampling on the unit sphere and over radial intervals for the Riesz Laplacian), thus bypassing grid-based quadrature and making high-dimensional problems tractable (Guo et al., 2022, Wang et al., 2024).
- Quadrature Improvement: Radial singular integrals are handled by Gaussian quadrature (e.g., Gauss–Jacobi for the inner, singular part), sharply reducing variance and removing sensitive hyperparameters (Hu et al., 2024, Li et al., 13 Jun 2025).
- Laplace Transform Methods (Laplace-fPINNs): Fractional time derivatives are mapped to algebraic terms in Laplace space, where neural networks are trained on the transformed problem, followed by numerical Laplace inversion (Yan et al., 2023).
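As a concrete illustration of the finite-difference and Monte Carlo strategies above, the following minimal NumPy sketch approximates the one-dimensional Caputo derivative of a known test function both with the L1 history-weight scheme and with a Beta-distributed Monte Carlo estimator of the form $D_t^{\alpha}u(t) = \tfrac{t^{1-\alpha}}{\Gamma(2-\alpha)}\,\mathbb{E}_{r\sim\mathrm{Beta}(1,1-\alpha)}[u'(tr)]$. The function names and the test function $u(t)=t^2$ are illustrative, not taken from any particular fPINN implementation.

```python
import numpy as np
from scipy.special import gamma

def caputo_l1(u_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at the final point of a uniform time grid on which u is sampled."""
    n = len(u_vals) - 1                                  # number of time steps
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)        # L1 history weights b_k
    du = (u_vals[1:] - u_vals[:-1])[::-1]                # u(t_{n-k}) - u(t_{n-k-1})
    return dt ** (-alpha) / gamma(2 - alpha) * np.sum(b * du)

def caputo_mc(u_prime, t, alpha, n_samples=100_000, seed=0):
    """Monte Carlo estimate: D^alpha u(t) = t^(1-alpha)/Gamma(2-alpha)
    * E_{r ~ Beta(1, 1-alpha)}[u'(t r)], valid for 0 < alpha < 1."""
    rng = np.random.default_rng(seed)
    r = rng.beta(1.0, 1.0 - alpha, size=n_samples)
    return t ** (1 - alpha) / gamma(2 - alpha) * np.mean(u_prime(t * r))

# Test on u(t) = t^2, whose Caputo derivative is 2 t^(2-alpha) / Gamma(3-alpha).
alpha, T = 0.5, 1.0
ts = np.linspace(0.0, T, 2001)
print("exact :", 2 * T ** (2 - alpha) / gamma(3 - alpha))
print("L1    :", caputo_l1(ts ** 2, ts[1] - ts[0], alpha))
print("MC    :", caputo_mc(lambda s: 2 * s, T, alpha))
```

Both estimators target the same quantity; in an fPINN the grid values `u_vals` (or the derivative `u_prime`, via automatic differentiation) would come from the neural surrogate rather than a closed-form function.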
3. PINN Architecture, Loss Construction, and Training
A standard architecture is a fully connected network with several hidden layers (typically up to about $10$) of moderate width (up to about $100$ neurons), smooth activations such as tanh, and optionally polynomial blocks or domain-aware enforcement of constraints. The loss function is
$$
\mathcal{L}(\theta) = w_r\,\mathcal{L}_{\mathrm{res}} + w_b\,\mathcal{L}_{\mathrm{BC}} + w_0\,\mathcal{L}_{\mathrm{IC}} + w_d\,\mathcal{L}_{\mathrm{data}},
$$
where the physics-informed residual $\mathcal{L}_{\mathrm{res}}$ includes discretized or MC-approximated fractional derivatives; $\mathcal{L}_{\mathrm{BC}}$ and $\mathcal{L}_{\mathrm{IC}}$ enforce boundary and initial constraints; and $\mathcal{L}_{\mathrm{data}}$ captures data fitting if measurements are present (Wang et al., 2024, Pang et al., 2018, Guo et al., 2022). The weights $w_r, w_b, w_0, w_d$ can be adaptively tuned, and training typically proceeds via Adam followed by L-BFGS, possibly with learning rate decay and early stopping.
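A minimal, self-contained sketch of this pipeline is given below. It assumes a manufactured 1D subdiffusion problem $D_t^{\alpha}u = u_{xx} + f$ with exact solution $u = t^2\sin(\pi x)$, discretizes the Caputo derivative with an L1 operational matrix (Section 2), and uses illustrative network sizes, loss weights, and iteration counts; it is not the exact configuration of any cited paper.

```python
import math
import torch

torch.set_default_dtype(torch.float64)
torch.manual_seed(0)
alpha = 0.5

# Manufactured problem: D_t^alpha u = u_xx + f with exact solution u = t^2 sin(pi x).
def f_rhs(x, t):
    return (2.0 / math.gamma(3 - alpha)) * t ** (2 - alpha) * torch.sin(math.pi * x) \
           + math.pi ** 2 * t ** 2 * torch.sin(math.pi * x)

# Fully connected surrogate u_theta(x, t) with tanh activations.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# Operational matrix of L1 weights: row n-1 yields the Caputo derivative at t_n.
Nt, Nx = 20, 20
t = torch.linspace(0.0, 1.0, Nt + 1)
x = torch.linspace(0.0, 1.0, Nx + 2)[1:-1]               # interior spatial points
dt = (t[1] - t[0]).item()
kk = torch.arange(Nt, dtype=torch.float64)
b = (kk + 1) ** (1 - alpha) - kk ** (1 - alpha)          # L1 history weights b_k
D = torch.zeros(Nt, Nt + 1)
for n in range(1, Nt + 1):
    for k in range(n):                                   # sum_k b_k (u_{n-k} - u_{n-k-1})
        D[n - 1, n - k] += b[k]
        D[n - 1, n - k - 1] -= b[k]
D *= dt ** (-alpha) / math.gamma(2 - alpha)

X, T = torch.meshgrid(x, t, indexing="ij")               # (Nx, Nt+1) space-time grid

def losses():
    Xc = X.reshape(-1, 1).requires_grad_(True)
    Tc = T.reshape(-1, 1)
    u = net(torch.cat([Xc, Tc], dim=-1))
    u_x = torch.autograd.grad(u.sum(), Xc, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), Xc, create_graph=True)[0]
    caputo = u.reshape(Nx, Nt + 1) @ D.T                 # D_t^alpha u at t_1..t_Nt
    res = caputo - u_xx.reshape(Nx, Nt + 1)[:, 1:] - f_rhs(X, T)[:, 1:]
    # Boundary u(0,t) = u(1,t) = 0 and initial u(x,0) = 0 conditions.
    xb = torch.tensor([[0.0], [1.0]]).repeat(Nt + 1, 1)
    tb = t.repeat_interleave(2).unsqueeze(-1)
    u_bc = net(torch.cat([xb, tb], dim=-1))
    u_ic = net(torch.cat([x.unsqueeze(-1), torch.zeros(Nx, 1)], dim=-1))
    return res.pow(2).mean(), u_bc.pow(2).mean(), u_ic.pow(2).mean()

def total_loss():
    l_res, l_bc, l_ic = losses()
    return l_res + 10.0 * (l_bc + l_ic)                  # illustrative constraint weights

adam = torch.optim.Adam(net.parameters(), lr=1e-3)        # Adam stage
for _ in range(2000):
    adam.zero_grad()
    loss = total_loss()
    loss.backward()
    adam.step()

lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=200)  # L-BFGS fine-tuning stage
def closure():
    lbfgs.zero_grad()
    loss = total_loss()
    loss.backward()
    return loss
lbfgs.step(closure)

u_pred = net(torch.cat([X.reshape(-1, 1), T.reshape(-1, 1)], dim=-1)).reshape(Nx, Nt + 1)
u_true = T ** 2 * torch.sin(math.pi * X)
print("relative L2 error:", ((u_pred - u_true).norm() / u_true.norm()).item())
```

The same structure extends to inverse problems by treating the fractional order or equation coefficients as additional trainable parameters alongside the network weights.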
4. Advanced fPINN Frameworks
Multiple extensions address key limitations and expand applicability:
- GMC-PINN (General Monte Carlo PINN): Samples nodes near evaluation points according to generalized probability laws derived from the fractional weights, adapting stencils naturally to boundaries and irregular domains. Block classification further reduces cost and storage (Wang et al., 2024).
- Multistage, Multiprecision fPINNs: Stage-wise networks successively resolve spectral error components, using arbitrary-precision arithmetic for the weights (float64/float128) to attain errors down to roughly 1e-7–1e-8 for subdiffusion (Xue et al., 28 May 2025).
- Transformed Diffusion-Wave fPINNs: Integration by parts transforms the Caputo derivative to a mesh-free expression avoiding shifted derivatives, permitting efficient MC or Gauss–Jacobi quadrature and facilitating adaptive sampling (e.g., RAD) (Li et al., 13 Jun 2025).
- Laplace-fPINN: The entire solution and physics loss are evaluated in Laplace space, simplifying fractional time operators and making long-time integration efficient; the PINN is trained on the transformed problem and the time-domain solution is recovered via Gaver–Stehfest inversion after optimization (Yan et al., 2023). A sketch of the inversion step appears after this list.
- Score-fPINN: Fractional score functions transform fractional Fokker–Planck–Lévy (FPL) equations into second-order, local PDEs, enabling scalable training up to roughly $100$ dimensions. The score can be learned via fractional score matching (FSM) if conditional distributions are known, or via a PINN-based sliced score matching otherwise (Hu et al., 2024).
- Bi-orthogonal fPINN: For stochastic FPDEs, the solution is represented using BO expansions and four neural networks, with weak-form constraints guaranteeing orthonormality and robust performance at eigenvalue crossings (Ma et al., 2023).
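To make the Laplace-space idea concrete (see the Laplace-fPINN item above), recall that for $0<\alpha<1$ the Caputo derivative transforms as $\mathcal{L}\{D_t^{\alpha}u\}(s) = s^{\alpha}U(s) - s^{\alpha-1}u(0)$, so the fractional operator becomes algebraic; after a network for the transformed solution is trained, the time-domain solution is recovered by numerical inversion. The sketch below implements only the standard Gaver–Stehfest inversion step and verifies it on a closed-form transform; the function names are illustrative and this is not the implementation of the cited work.

```python
import math
import numpy as np

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def gaver_stehfest(F, t, N=12):
    """Numerical inverse Laplace transform: f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    V = stehfest_coefficients(N)
    a = math.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Check on u(t) = t^2 (so u(0) = 0): L{D_t^alpha u}(s) = s^alpha * 2/s^3 = 2 s^(alpha - 3),
# whose exact inverse is 2 t^(2 - alpha) / Gamma(3 - alpha).
alpha, t = 0.5, 1.5
print("inverted:", gaver_stehfest(lambda s: 2.0 * s ** (alpha - 3.0), t))
print("exact   :", 2.0 * t ** (2 - alpha) / math.gamma(3 - alpha))
```

Gaver–Stehfest is known to be sensitive for oscillatory or non-smooth targets, which is one reason numerical stability of the inversion is listed among the open problems below.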
5. Applications and Performance
fPINNs address a diverse spectrum of FPDEs:
- Anomalous transport/diffusion: Variable-order and time/space-fractional diffusion, with robust handling of data noise and small relative errors in the learned coefficients (Thakur et al., 2024).
- Rheological models (viscoelasticity): Fractional Maxwell equations, with recovery of experimental moduli to low relative error (Thakur et al., 2024).
- Inverse problems: Simultaneous inference of fractional orders, coefficients, and data-driven dynamics (e.g., COVID-19 SEIR modeling with Caputo–Hadamard derivatives, with the fractional order inferred from data) (Cai et al., 2022, Pang et al., 2018).
- High-dimensional systems: MC-fPINN and quadrature-based variants efficiently solve high-dimensional problems (fractional Poisson, tempered operators), with sublinear error growth and improved computational efficiency (Hu et al., 2024, Sheng et al., 2024).
- Stochastic FPDEs: Weak-form bi-orthogonal fPINNs recover means, variances, and BO modes with small errors even in strongly nonlinear, high-variance, or eigenvalue-crossing problems (Ma et al., 2023).
Representative accuracy/cost results (see the table below):
| Method | Dimension | Rel. L² Error | Wall-time | Key Result/Feature |
|---|---|---|---|---|
| GMC-PINN, Halton MC | 3D | 3.18e-2 | ~minutes | Accurate on irregular domains |
| MC-fPINN (Guo et al., 2022) | 10D+ | ~1e-2 | ~hours | Scales approximately linearly in dimension |
| Multistage fPINN | 1D/2D | 1e-7–1e-8 | ~minutes | With multiprecision, 2–3 stages |
| Laplace-fPINN | 2D/3D | ~3% | ~hours | Efficient for long-time integration |
| Score-fPINN | 100D | 1–5% | 50 min | FPL equations, curse relief |
6. Advantages, Limitations, and Open Problems
- Advantages:
- Mesh-free: efficient for irregular domains, high-dimensional spaces, and nonlocal operators.
- NN surrogates handle black-box forcing and noisy data, and enable rapid solution of inverse and design problems.
- Monte Carlo/fractional-weight sampling and operational matrices provide scalability and computational acceleration.
- Score-based methods and Laplace transforms remove nonlocal complexity for high-dimensional or time-dependent operators.
- Weak formulation and multi-network BO expansions guarantee robustness in stochastic PDEs with challenging eigenspectra.
- Limitations:
- Optimization error often dominates the total error once training sets are large or the discretization order is high.
- Monte Carlo variance in sampling-based methods, especially for angular integrals in high dimensions, still requires innovation.
- Hyperparameter tuning (network size, MC sample count, weight penalty, collocation strategy) is empirical.
- Numerical stability for inversion (Laplace-fPINN), uncertainty quantification, and comprehensive error bounds remain open.
- Future Directions:
- Adaptive/physics-informed collocation and sampling.
- Integration with operator learning (DeepONets, Fourier neural operators).
- Rigorous convergence analysis and Bayesian calibration.
- Acceleration via parallelization, multi-GPU, and arbitrary precision autodiff.
7. Summary Table: Main fPINN Variants
| Variant | Fractional Derivative Handling | Key Innovation | Typical Accuracy |
|---|---|---|---|
| fPINN | Discrete history (GL/L1) | Hybrid AD/discretization | – |
| MC-fPINN | MC sampling (Beta, sphere) | Mesh-free in high dimensions, unbiased loss | ~1e-2 (10D+) |
| GMC-PINN | General MC, local node blocks | Irregular domains, block-based cost reduction | 2–3x speed-up |
| Laplace-fPINN | Laplace transform + PINN | Time derivative becomes an algebraic term | ~3% (2D/3D, long times) |
| Multistage fPINN | Multiprecision, multi-stage | Very low errors with few collocation points | 1e-7–1e-8 |
| Score-fPINN | Score matching, LL transformation | Lifts curse of dimensionality in FPL equations | 1–5% (up to 100D) |
| BO-fPINN | Weak-form, bi-orthogonal modes | Stochastic FPDEs, robust to eigenvalue crossing | Small errors (mean/variance, modes) |
In conclusion, fractional PINNs represent a flexible, generalizable family of neural solvers for FPDEs, integrating mesh-free numerical approximation, stochastic sampling, and physics-constrained loss functions, with innovations in Monte Carlo integration, operational matrices, Laplace transforms, fractional score functions, and bi-orthogonal weak-form loss. These advances address the curse of dimensionality, irregular domain geometry, and parameter inference—even in noisy, stochastic, or high-dimensional settings (Wang et al., 2024, Guo et al., 2022, Li et al., 13 Jun 2025, Hu et al., 2024, Sheng et al., 2024, Ma et al., 2023, Taheri et al., 2024, Xue et al., 28 May 2025, Thakur et al., 2024, Yan et al., 2023, Pang et al., 2018, Hu et al., 2024, Cai et al., 2022, Shekarpaz et al., 2023, Ye et al., 2021).