
Physics-Informed Neural Networks

Updated 7 October 2025
  • Physics-Informed Neural Networks (PINNs) are deep learning models that incorporate governing PDEs and boundary conditions directly into the loss function to solve complex nonlinear problems.
  • They use automatic differentiation to enforce physical laws, enabling both forward modeling (predicting field evolutions) and inverse modeling (estimating parameters from noisy data).
  • PINNs have practical applications in fields like biomedical engineering, seismic analysis, and energy recovery, offering a mesh-free, efficient alternative to traditional numerical methods.

Physics-Informed Neural Networks (PINNs) are a computational framework in which deep neural networks are trained to solve or infer unknowns in nonlinear multiphysics problems by embedding the governing partial differential equations (PDEs), boundary conditions, and (in the case of inverse problems) unknown physical parameters directly into the network’s loss function. The methodology enables both forward modeling—predicting the evolution of field variables given known parameters and conditions—and inverse modeling—estimating underlying parameters or fields from partial, possibly noisy, observations. This class of methods offers significant advantages for problems involving nonlinear coupled physics, high parametric or geometric complexity, or scarce observation data.

1. Embedding Governing Physics into Neural Networks

In the PINN paradigm, a neural network (with parameters $\theta$) is constructed as a function approximator for the primary solution fields—such as pressure $p(x,t)$ for a nonlinear diffusivity equation, or both displacement $u(x,y,t)$ and pressure $p(x,y,t)$ for Biot's poroelasticity. The core physics is enforced by computing the residual of the governing PDE(s) directly using automatic differentiation. For instance, for a nonlinear mass balance law:

$$\phi c_t \frac{\partial p}{\partial t} - \nabla\cdot\left[\mathcal{N}[\kappa](\nabla p)\right] = g,$$

where $\mathcal{N}[\kappa]$ may encode nonlinearities such as a pressure-dependent permeability (e.g., $\mathcal{N}[\kappa] = \kappa_0 p^2$), the network outputs are differentiated as needed, and the squared norm of the residual $\Pi$ is constructed as a physics loss term. For coupled multiphysics, such as Biot's equations for poromechanics, the PINN simultaneously models the solid stress–strain constitutive relation and the coupled pressure equation, requiring the network output to satisfy

$$\nabla\cdot\sigma(u, p) - f = 0,$$

alongside the appropriate mass balance for $p$.

The total loss minimized during training is a composite of data loss (e.g., mean squared error at prescribed initial/boundary conditions or observation points) and the aggregated PDE residual loss from randomly sampled collocation points across the spatio-temporal domain. This yields an objective of the form:

$$\text{MSE} = \text{MSE}_\text{data} + \text{MSE}_{\Pi},$$

where the relative weighting of terms is adjusted to balance data fidelity and physical consistency.
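As an illustration, the residual construction and composite loss above can be sketched in PyTorch (an assumed implementation choice; the paper's actual code may differ). The architecture, the zero source term $g = 0$, the unit coefficients, and the synthetic initial-condition data below are placeholders, not values from the source:

```python
import torch

torch.manual_seed(0)

# Small MLP approximating the pressure field p(x, t)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

phi_ct, kappa0 = 1.0, 1.0  # placeholder parameter values


def pde_residual(x, t):
    """Residual of phi*c_t * p_t - d/dx(kappa0 * p^2 * p_x) = g (1-D form)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    p = net(torch.cat([x, t], dim=1))
    grad = lambda out, var: torch.autograd.grad(
        out, var, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    p_t = grad(p, t)
    p_x = grad(p, x)
    flux = kappa0 * p ** 2 * p_x      # pressure-dependent nonlinearity N[kappa]
    flux_x = grad(flux, x)
    g = torch.zeros_like(p)           # zero source term, for illustration only
    return phi_ct * p_t - flux_x - g  # the residual Pi


# Data loss at initial-condition points plus physics loss at collocation points
x_d, t_d = torch.rand(32, 1), torch.zeros(32, 1)   # "data" (IC) points
p_obs = torch.sin(torch.pi * x_d)                  # synthetic observations
x_c, t_c = torch.rand(128, 1), torch.rand(128, 1)  # random collocation points

mse_data = torch.mean((net(torch.cat([x_d, t_d], dim=1)) - p_obs) ** 2)
mse_pde = torch.mean(pde_residual(x_c, t_c) ** 2)
loss = mse_data + mse_pde             # MSE = MSE_data + MSE_Pi
```

In practice the two terms are often weighted relative to each other, as noted above; here they are summed with equal weight for simplicity.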

2. Forward and Inverse Problem Formulations

Forward PINN: The standard application involves mapping spatiotemporal coordinates to the predicted field(s) by minimizing the loss with respect to network weights, assuming all physical parameters are known. The inclusion of PDE residual loss (physics term) greatly accelerates convergence and improves generalization, especially for nonlinear and strongly coupled systems.

Inverse PINN: For parameter identification, the unknown physical parameters $\theta$ (e.g., the porosity–compressibility product $\phi c_t$, the nonlinear permeability factor $\kappa_0$, or the Biot coefficients $\mu_l$, $\lambda_l$, $\alpha$) are treated as additional trainable variables. The PINN is trained with available noisy interior measurements, and the residuals for both solution fields and physical parameters guide the learning. This approach allows simultaneous estimation of solutions and parameters, though it introduces higher sensitivity and challenges arising from data sparsity, noise, and non-uniqueness.

| Problem | Field(s) Solved | Parameters Known | Loss Terms |
|---------|-----------------|------------------|------------|
| Forward | $p$ or $(u,v,p)$ | All | IC/BC loss, PDE residual |
| Inverse | $p$ or $(u,v,p)$ | Some unknown ($\theta$) | Data mismatch, PDE residual |

The key difference is that inverse problems require updating both the field variables and the physical parameters, with increased sensitivity to the placement and quantity of observation data.
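A minimal sketch of the inverse setup, again assuming a PyTorch implementation: the unknown factor $\kappa_0$ is promoted to a trainable variable (stored in log scale to keep it positive, a common but here assumed choice) and updated jointly with the network weights. The measurement data and the unit coefficients $\phi c_t = 1$, $g = 0$ are placeholders:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))

# Unknown physical parameter, stored in log scale so kappa0 stays positive
log_kappa0 = torch.nn.Parameter(torch.tensor(0.0))
opt = torch.optim.Adam(list(net.parameters()) + [log_kappa0], lr=1e-2)

xt_meas = torch.rand(64, 2)                    # interior measurement points
p_meas = torch.sin(torch.pi * xt_meas[:, :1])  # placeholder "noisy" data
xt_col = torch.rand(128, 2)                    # collocation points


def residual(xt):
    """PDE residual with phi*c_t = 1 and g = 0 (placeholder coefficients)."""
    xt = xt.clone().requires_grad_(True)
    p = net(xt)
    grads = torch.autograd.grad(
        p, xt, grad_outputs=torch.ones_like(p), create_graph=True)[0]
    p_x, p_t = grads[:, :1], grads[:, 1:]
    flux = log_kappa0.exp() * p ** 2 * p_x     # kappa0 enters the physics loss
    flux_x = torch.autograd.grad(
        flux, xt, grad_outputs=torch.ones_like(flux),
        create_graph=True)[0][:, :1]
    return p_t - flux_x


for _ in range(20):                            # a few illustrative steps
    opt.zero_grad()
    mse_data = torch.mean((net(xt_meas) - p_meas) ** 2)
    mse_pde = torch.mean(residual(xt_col) ** 2)
    (mse_data + mse_pde).backward()
    opt.step()                                 # updates weights AND kappa0
```

Because $\kappa_0$ only receives gradient through the PDE residual, the physics term is what makes the parameter identifiable from the data.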

3. Training, Hyperparameters, and Optimization

Data Dependence: The accuracy of PINNs is highly dependent on the number and distribution of training data (e.g., points for initial and boundary conditions) and on the number of residual collocation points for the physics loss. Systematic studies in the paper demonstrate that increasing the number of boundary/initial data points ($N_b$) and collocation points ($N_\Pi$) reduces the relative $\mathcal{L}^2$ error. Inclusion of the physics residual term provides a dramatic improvement—typical error reductions are from $5.6 \times 10^{-2}$ to below $1.0 \times 10^{-2}$ for the nonlinear diffusivity equation with 96 training examples.

Architecture and Hyperparameter Sensitivity: Optimal network size (number of layers $N_{hl}$, neurons per layer $N_n$) is problem dependent and shows a trade-off between underfitting (too shallow or narrow) and overfitting (too deep or wide). For the nonlinear diffusivity equation, the optimal architecture identified is $N_{hl} = 6$, $N_n = 5$; for Biot's equations, $N_{hl} = 6$, $N_n = 20$. These architectures achieved $\mathcal{L}^2$ errors on the order of $10^{-3}$ or lower.

Optimization: Gradient-based optimizers are employed—quasi-Newton (L-BFGS) exhibits strong convergence properties for forward problems, while a two-step ADAM + L-BFGS strategy (ADAM for initial epochs, L-BFGS for refinement) yields both robust convergence and accuracy for inverse modeling.
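The two-step strategy can be sketched as follows, assuming PyTorch's Adam and LBFGS optimizers stand in for those used in the paper; the plain data-fitting loss here is a placeholder for the full composite PINN loss:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))
x = torch.rand(64, 2)
y = torch.sin(torch.pi * x[:, :1])  # placeholder target


def loss_fn():
    return torch.mean((net(x) - y) ** 2)


loss_before = loss_fn().item()

# Stage 1: Adam warm-up for the initial epochs
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    adam.zero_grad()
    loss_fn().backward()
    adam.step()

# Stage 2: L-BFGS refinement; it re-evaluates the loss through a closure
lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=50,
                          line_search_fn="strong_wolfe")


def closure():
    lbfgs.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss


lbfgs.step(closure)
loss_after = loss_fn().item()
```

The closure is required because L-BFGS may evaluate the objective multiple times per step during its line search.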

4. Stochastic Effects, Reliability, and Noise Handling

Initialization and Stochastic Variations: Different randomizations in network initialization and collocation points yield variability in results. Reporting of standard deviations across multiple training realizations is essential for robust error assessment. Particularly in inverse problems, parameter recovery can show high variance and the practice of averaging over multiple trained models is recommended for reliable inference.

Noisy Measurements: The paper systematically introduces controlled Gaussian noise into the measurement data and observes that parameter estimation error increases with the noise level $\epsilon$. However, increasing the number of training examples mitigates this noise effect: with moderate noise ($\leq 5\%$) and sufficient data, parameter errors remain below roughly $15\%$, although with larger uncertainty bands across realizations.
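A sketch of such a noise study, with an assumed relative-noise model and a trivial stand-in for the trained estimator; the point is the noise injection and the reporting of mean and standard deviation over independent realizations:

```python
import torch

torch.manual_seed(0)
p_clean = torch.sin(torch.pi * torch.linspace(0, 1, 50)).unsqueeze(1)
eps = 0.05  # 5% relative noise level (assumed noise model)

estimates = []
for _ in range(10):  # independent realizations
    noise = eps * p_clean.abs().mean() * torch.randn_like(p_clean)
    p_noisy = p_clean + noise
    # Stand-in for "train a PINN on p_noisy and recover a parameter";
    # here the sample mean plays the role of the recovered quantity.
    estimates.append(p_noisy.mean().item())

est = torch.tensor(estimates)
mean, std = est.mean().item(), est.std().item()  # report both, not just the mean
```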

5. Hyperparameter Selection and Transferability

Sensitivity analysis reveals that optimal hyperparameter configurations for the forward solution problem are also effective for the inverse case—a result important for workflow efficiency. Once a network architecture is found that resolves the PDE solution landscape accurately, the same can be deployed for parameter identification. Initialization via transfer learning can be leveraged to enhance convergence, especially for complex multiphysics settings, and a combination of optimizers is beneficial (ADAM for warm-up, L-BFGS for fine tuning).
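Transfer-learning initialization can be as simple as copying the trained forward network's weights into the inverse-problem network before training, sketched here in PyTorch (an assumed implementation detail):

```python
import torch

torch.manual_seed(0)


def make_net():
    # Same architecture for forward and inverse problems, per the
    # transferability result above
    return torch.nn.Sequential(torch.nn.Linear(2, 20), torch.nn.Tanh(),
                               torch.nn.Linear(20, 1))


forward_net = make_net()  # assume already trained on the forward problem
inverse_net = make_net()  # freshly initialized inverse-problem network

# Transfer the converged forward weights as the inverse starting point
inverse_net.load_state_dict(forward_net.state_dict())
```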

6. Applications and Real-World Impact

The PINN framework for nonlinear multiphysics problems demonstrated in this paper is directly applicable to:

  • Biomedical Engineering: Fluid-structure interactions in porous biological tissues (e.g., hydrogel swelling, tissue perfusion).
  • Seismology and Earthquake Modeling: Coupled pressure and deformation in fault zones for improved seismic prediction.
  • Energy Harvesting: Coupled fluid-solid processes in geothermal, oil and gas recovery, subsurface storage.

The ability of PINNs to handle strong nonlinearity, high-dimensional parameter spaces, and sparse/noisy data places them as a viable mesh-free alternative to finite element, finite difference, or finite volume methods, especially where rapid deployment is required or mesh generation is prohibitive.

7. Practical Guidelines and Limitations

Advances outlined in this work contribute to a better understanding of the trade-offs in PINN deployment:

  • Data vs. Physics Loss: Integration of PDE residuals into the loss is crucial for error reduction and physical fidelity.
  • Network Architecture: Careful tuning, guided by problem complexity and preliminary sensitivity studies, is necessary to avoid under- or overfitting.
  • Parameter Recovery: In inverse problems, large variance is observed; averaging and robust error metrics are advised.
  • Noise Robustness: Parameter recovery can remain effective with reasonable noise and adequate data; however, heavy noise requires substantially more data or averaging.
  • Computational Requirements: PINN training is relatively costly, with inverse problems particularly demanding. Performance can be improved by optimizer selection and possible use of transfer learning for network initialization.
  • Reproducibility: For critical parameter estimation, reporting distributions (mean and standard deviation) over multiple training instances is recommended.

In summary, the integration of physics constraints in neural networks for nonlinear diffusivity and Biot’s equations enables accurate, mesh-free solution of forward and inverse multiphysics problems, with systematic treatment of hyperparameter selection, data/noise dependence, and training variability forming the foundation for robust practical application in complex scientific and engineering domains (Kadeethum et al., 2020).
