Forward PINN for Mesh-Free PDE Simulation

Updated 24 November 2025
  • Forward PINN is a physics-informed neural network that solves forward PDEs by embedding governing equations into its residual-based loss function.
  • It employs deep neural networks to approximate fields in advection–dispersion problems, achieving relative L2 errors below 1% in benchmark tests.
  • The mesh-free approach circumvents classical discretization challenges, enabling robust performance in irregular domains and high-dimensional settings.

A Forward PINN (“Physics-Informed Neural Network”) is a machine learning model designed to solve forward problems in partial differential equations (PDEs) by encoding physics directly into the loss function. In these applications, the PINN is trained to approximate the solution of a known PDE given initial and boundary conditions, without relying on discretized meshes or explicit numerical stencils. The method replaces classical mesh-based solvers with deep neural networks, which can be trained to minimize residuals of the governing equations and satisfy constraints, offering flexibility in irregular domains and high-dimensional problems.

1. Mathematical Formulation of the Forward PINN

The forward PINN method addresses the transient advection–dispersion equation (ADE) for solute transport in porous media:

$$u_t + \mathbf{v}\cdot\nabla u - D\,\nabla^2 u = 0, \quad (\mathbf{x}, t) \in \Omega \times (0,T]$$

subject to initial and boundary conditions:

$$\begin{aligned} u(\mathbf{x}, 0) &= u_0(\mathbf{x}), \\ u(\mathbf{x}, t)\big|_{\partial \Omega_D} &= g_D(\mathbf{x}, t), \\ -D\,\nabla u \cdot \mathbf{n}\big|_{\partial \Omega_N} &= g_N(\mathbf{x}, t). \end{aligned}$$

Here, $u$ is the scalar concentration, $\mathbf{v}$ is the known velocity field, $D$ is the (possibly anisotropic) dispersion tensor, and $\Omega$ is the computational domain with Dirichlet boundary $\partial\Omega_D$ and Neumann boundary $\partial\Omega_N$. The Péclet number ($Pe$) characterizes the ratio of advection to dispersion.

2. Neural Network Architecture for Physics-Imposed Solution

In the forward ADE/Darcy context, separate deep neural networks (DNNs) are constructed for each field:

  • Conductivity DNN: $\hat K(\mathbf{x};\psi)$, typically $4 \times 40$ (4 hidden layers of 40 neurons), $\tanh$ activations; interpolates known $K(\mathbf{x})$ values for heterogeneous hydraulic conductivity.
  • Hydraulic head DNN: $\hat h(\mathbf{x}; \gamma)$, $4 \times 40$, $\tanh$ activations; solves the Darcy flow equation $\nabla \cdot \left[ K \nabla h \right] = 0$.
  • Concentration DNN: $\hat u(\mathbf{x}, t; \theta)$, $5 \times 60$, $\tanh$ activations; solves the transient ADE.

Network inputs are the spatial and/or temporal coordinates; outputs are scalar field predictions. Weight initialization employs Xavier schemes to facilitate training stability.
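
To make this concrete, here is a minimal PyTorch sketch of one such field network; the class name `FieldMLP` and its defaults are illustrative, not from the source. It builds the $5 \times 60$ concentration network $\hat u(\mathbf{x}, t; \theta)$ with $\tanh$ activations and Xavier initialization:

```python
# Minimal sketch of one field network (illustrative names, assuming PyTorch):
# a depth x width tanh MLP mapping coordinates to a scalar field prediction.
import torch
import torch.nn as nn

class FieldMLP(nn.Module):
    def __init__(self, in_dim=2, width=60, depth=5, out_dim=1):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, out_dim))   # linear output layer
        self.net = nn.Sequential(*layers)
        for m in self.net:                     # Xavier init for training stability
            if isinstance(m, nn.Linear):
                nn.init.xavier_normal_(m.weight)
                nn.init.zeros_(m.bias)

    def forward(self, x, t):
        # x, t are coordinate tensors of shape (N, 1); output is u_hat(x, t)
        return self.net(torch.cat([x, t], dim=-1))

u_net = FieldMLP()  # 5 x 60 concentration network
```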

3. Residual-based PINN Loss Construction

The core of the forward PINN is the residual-driven loss, which integrates the physics constraints into the neural optimization:

$$\begin{aligned} r_f(\mathbf{x}, t; \theta) &= \hat u_t + \mathbf{v} \cdot \nabla \hat u - D\,\nabla^2 \hat u, \\ r_{IC} &= \hat u(\mathbf{x}, 0) - u_0(\mathbf{x}), \\ r_{BC,D} &= \hat u - g_D, \\ r_{BC,N} &= -D\,\nabla \hat u \cdot \mathbf{n} - g_N. \end{aligned}$$

These are combined into a total mean-squared loss:

$$J(\theta) = w_f\,\frac{1}{N_f} \sum_{i=1}^{N_f} r_f^2(\mathbf{x}_f^i, t_f^i) + w_{IC}\,\frac{1}{N_{IC}} \sum_{i=1}^{N_{IC}} r_{IC}^2(\mathbf{x}_{IC}^i) + w_{BC}\,\frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} r_{BC}^2(\mathbf{x}_{BC}^i, t_{BC}^i)$$

where $w_{IC} = w_{BC} = 10\,w_f$ is typical. Experimental settings often use $N_f = 20{,}000$–$200{,}000$ collocation points in space-time, $N_{IC} \sim 100$, and $N_{BC} \sim 200$ for 1D problems, with larger counts in 2D.

When measurement data $u^*$ are available, an optional data-misfit term can be added:

$$w_m\,\frac{1}{N_m}\sum_{i=1}^{N_m}\left(\hat u(\mathbf{x}_m^i, t_m^i) - u^*(\mathbf{x}_m^i, t_m^i)\right)^2$$
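
A minimal sketch of how these residuals translate into code, assuming PyTorch autograd and the `FieldMLP` sketch above; it is shown in 1D with constant $v$ and $D$, and the `batch` dictionary keys are hypothetical. Only the Dirichlet boundary term is written out:

```python
# Residual-based PINN loss, a sketch under the assumptions stated above.
import torch

def ade_residual(u_net, x, t, v=1.0, D=0.01):
    # r_f = u_t + v * u_x - D * u_xx, computed by automatic differentiation
    x = x.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    u = u_net(x, t)
    grad = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + v * u_x - D * u_xx

def pinn_loss(u_net, batch, w_f=1.0, w_ic=10.0, w_bc=10.0):
    # batch is a hypothetical dict of collocation, initial, and boundary points;
    # a Neumann term would penalize -D * grad(u) . n - g_N analogously.
    r_f = ade_residual(u_net, batch["x_f"], batch["t_f"])
    r_ic = u_net(batch["x_ic"], torch.zeros_like(batch["x_ic"])) - batch["u0"]
    r_bc = u_net(batch["x_bc"], batch["t_bc"]) - batch["g_d"]
    return (w_f * r_f.pow(2).mean()
            + w_ic * r_ic.pow(2).mean()
            + w_bc * r_bc.pow(2).mean())
```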

4. Training Algorithms and Hyperparameter Choices

The forward PINN is typically optimized in two stages:

  • Adam optimizer: 20,000 iterations, learning rate $\alpha = 10^{-3}$, mini-batch size 500.
  • L-BFGS-B optimizer: full-batch, run to convergence with a strict tolerance of $10^{-8}$.

Weights in the loss function are set by physical scaling; $w_{IC} = w_{BC} = 10$ works robustly in forward ADEs. Collocation sampling is uniform or quasi-random (e.g., Sobol or Latin hypercube) to ensure adequate domain coverage, as in the sketch below.
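
A sketch of the two-stage schedule with Sobol collocation sampling, again assuming PyTorch and the snippets above; `make_batch` is a hypothetical helper that attaches the IC/BC points to a collocation batch:

```python
# Two-stage training: Adam, then full-batch L-BFGS (a sketch, not the
# reference implementation); hyperparameters follow the text.
import torch
from torch.quasirandom import SobolEngine

sobol = SobolEngine(dimension=2, scramble=True)
pts = sobol.draw(20_000)                     # quasi-random (x, t) in [0, 1]^2
x_f, t_f = pts[:, :1], pts[:, 1:]            # rescale to Omega x (0, T] as needed

opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
for it in range(20_000):                     # stage 1: Adam, mini-batches of 500
    idx = torch.randint(0, x_f.shape[0], (500,))
    loss = pinn_loss(u_net, make_batch(x_f[idx], t_f[idx]))
    opt.zero_grad()
    loss.backward()
    opt.step()

lbfgs = torch.optim.LBFGS(u_net.parameters(), max_iter=50_000,
                          tolerance_grad=1e-8, line_search_fn="strong_wolfe")
def closure():                               # stage 2: full-batch polish
    lbfgs.zero_grad()
    loss = pinn_loss(u_net, make_batch(x_f, t_f))
    loss.backward()
    return loss
lbfgs.step(closure)
```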

5. Quantitative Benchmarks and Comparison to Classical Methods

The primary figure of merit is the relative $L_2$ error over a test grid:

$$\epsilon = \frac{\| u - \hat u \|_2}{\| u \|_2}$$

Forward PINN results on benchmark ADEs include:

  • 1D, $Pe = 62.8$: $\epsilon \approx 6 \times 10^{-4}$
  • 1D, $Pe = 628$: $< 1\%$ maximum pointwise error; sharp-gradient recovery requires more network capacity
  • 2D, $Pe = 50$–$200$: $\epsilon \approx 1.5 \times 10^{-3}$–$2.6 \times 10^{-3}$

The method remains accurate ($< 1\%$ relative error) up to $Pe = 200$, outperforming conventional Galerkin FEM and stabilized SUPG methods, particularly at large $Pe$ where mesh-aligned methods suffer crosswind oscillations or tuning instability.
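
As implemented, the metric is a one-liner; a minimal sketch assuming PyTorch tensors of equal shape:

```python
# Relative L2 error over a test grid, a direct transcription of the formula above.
import torch

def rel_l2(u_true: torch.Tensor, u_pred: torch.Tensor) -> torch.Tensor:
    return torch.linalg.norm(u_true - u_pred) / torch.linalg.norm(u_true)
```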

6. Advantages, Scalability, and Domain-specific Insights

PINN's main strengths for forward problems are:

  • Discretization-free: Avoids meshes, enabling easy application to irregular or high-dimensional domains.
  • Parameter-free stability: Unlike SUPG, no stabilization-parameter tuning is required.
  • Robustness to mesh orientation: No crosswind artifacts.
  • Flexible enforcement of boundary/initial conditions.
  • Easy inclusion of measurement data.

Computational cost scales with the number of dimensions and problem stiffness (Pe); GPU acceleration and adaptive residual sampling can ameliorate scaling. Training typically converges in minutes to hours (GPU), and cost is competitive with mesh-refined PDE solvers for 2D ADEs.
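
One common form of adaptive residual sampling, sketched under the assumptions of the earlier snippets (scoring a random candidate pool by residual magnitude is one strategy among several, not a method prescribed by the source):

```python
# Adaptive residual sampling sketch: keep the candidate points where the PDE
# residual is largest, concentrating collocation near sharp fronts.
import torch

def adaptive_resample(u_net, n_keep=5_000, n_pool=100_000):
    pool = torch.rand(n_pool, 2)                     # candidate (x, t) pool
    r = ade_residual(u_net, pool[:, :1], pool[:, 1:])
    worst = r.abs().flatten().topk(n_keep).indices   # largest-residual points
    return pool[worst].detach()
```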

7. Practical Summary and Outlook

The forward PINN framework enforces a mesh-free, residual-based loss incorporating the governing PDE and its constraints. Standard architectures (4–5 $\tanh$ layers of 40–60 neurons) and a two-stage Adam → L-BFGS schedule suffice for robust training on challenging advection–dispersion systems. Relative $L_2$ errors below $1\%$ are achieved for $Pe$ up to 200, with marked advantages over FEM in stability and in flexibility for complex geometries. The approach thus provides a powerful meshless alternative for forward PDE simulation, especially as computational resources and parallelism grow, and it supports both single-physics and coupled Darcy/ADE systems (He et al., 2020).
