Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems (1809.08327v1)

Published 21 Sep 2018 in math.AP, physics.comp-ph, and stat.ML

Abstract: Physics-informed neural networks (PINNs) have recently emerged as an alternative way of solving partial differential equations (PDEs) without the need of building elaborate grids, instead, using a straightforward implementation. In particular, in addition to the deep neural network (DNN) for the solution, a second DNN is considered that represents the residual of the PDE. The residual is then combined with the mismatch in the given data of the solution in order to formulate the loss function. This framework is effective but is lacking uncertainty quantification of the solution due to the inherent randomness in the data or due to the approximation limitations of the DNN architecture. Here, we propose a new method with the objective of endowing the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty. We first account for the parametric uncertainty when the parameter in the differential equation is represented as a stochastic process. Multiple DNNs are designed to learn the modal functions of the arbitrary polynomial chaos (aPC) expansion of its solution by using stochastic data from sparse sensors. We can then make predictions from new sensor measurements very efficiently with the trained DNNs. Moreover, we employ dropout to correct the over-fitting and also to quantify the uncertainty of DNNs in approximating the modal functions. We then design an active learning strategy based on the dropout uncertainty to place new sensors in the domain to improve the predictions of DNNs. Several numerical tests are conducted for both the forward and the inverse problems to quantify the effectiveness of PINNs combined with uncertainty quantification. This NN-aPC new paradigm of physics-informed deep learning with uncertainty quantification can be readily applied to other types of stochastic PDEs in multi-dimensions.

Overview of Quantifying Uncertainty in Physics-Informed Neural Networks

The paper explores a novel approach to uncertainty quantification in physics-informed neural networks (PINNs), addressing both forward and inverse stochastic problems for partial differential equations (PDEs). Traditional PINNs provide an efficient framework for solving deterministic PDEs, but they lack mechanisms to assess uncertainty arising from stochastic inputs and from the approximate nature of the neural network itself. This paper fills that gap by introducing methods to quantify two types of uncertainty: parametric uncertainty and approximation uncertainty.

Methodological Approach

The researchers use the arbitrary polynomial chaos (aPC) expansion to express solutions of stochastic PDEs, which captures the parametric uncertainty. Multiple deep neural networks (DNNs) learn the modal functions of the aPC expansion from stochastic data gathered by sparse sensors; once trained, these DNNs can make predictions from new sensor measurements very efficiently, without re-solving the stochastic problem (see the sketch below).
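
A minimal sketch of this structure, assuming PyTorch, a 1-D spatial domain, and arbitrary network sizes (none of these details are taken from the paper): each modal function u_k(x) gets its own small DNN, and the stochastic solution is reconstructed as u(x; xi) ≈ Σ_k u_k(x) φ_k(xi), where the φ_k are the aPC basis polynomials evaluated at a realization xi.

```python
import torch
import torch.nn as nn

class ModalNet(nn.Module):
    """One small DNN approximating a single modal function u_k(x)."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

num_modes = 4  # truncation: number of retained aPC terms (assumed)
modal_nets = nn.ModuleList(ModalNet() for _ in range(num_modes))

def u_apc(x, phi):
    """Reconstruct u(x; xi); phi holds the basis values phi_k(xi), shape (K,)."""
    modes = torch.cat([net(x) for net in modal_nets], dim=1)  # (batch, K)
    return modes @ phi                                        # (batch,)

# Example evaluation at 11 spatial points for one random realization.
x = torch.linspace(0.0, 1.0, 11).unsqueeze(1)
phi = torch.randn(num_modes)  # stand-in for actual aPC basis values
u = u_apc(x, phi)
```

In the paper, training couples the sensor-data mismatch with the PDE residual obtained by automatic differentiation, in the usual PINN fashion; that loss is omitted here for brevity.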

In their approach, the authors also incorporate dropout to estimate the approximation uncertainty of the DNNs themselves. Dropout is ordinarily applied to mitigate overfitting, but it serves a dual purpose here: kept active at prediction time, it acts as a Bayesian approximation that yields quantifiable uncertainty estimates for the model's predictions. Active learning is then built on this signal, with the dropout uncertainty guiding where new sensors should be placed in the domain to improve the model's predictive capability (sketched below).
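
The mechanics of Monte Carlo dropout and the resulting active-learning rule can be summarized in a few lines. This is a hedged sketch, not the authors' code: `model` stands for any network containing `nn.Dropout` layers, and the sample count and candidate grid are arbitrary choices.

```python
import torch

@torch.no_grad()
def mc_dropout_stats(model, x, n_samples=100):
    """Run n_samples stochastic forward passes with dropout left on."""
    model.train()  # keeps nn.Dropout layers active at inference time
    preds = torch.stack([model(x) for _ in range(n_samples)])  # (S, N, 1)
    return preds.mean(dim=0), preds.std(dim=0)

def next_sensor_location(model, candidates):
    """Active learning: propose the candidate point with the largest
    dropout predictive standard deviation."""
    _, std = mc_dropout_stats(model, candidates)
    return candidates[std.squeeze(-1).argmax()]
```

The design choice mirrors the paper's logic: where the dropout variance is largest, the DNN approximation is least trusted, so a new sensor there buys the most information.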

Numerical Experiments and Results

Several numerical experiments demonstrate that the methodology accurately solves both forward and inverse stochastic PDEs. Dropout proves effective at reducing overfitting while simultaneously quantifying approximation uncertainty, and the dropout-driven active learning strategy measurably reduces prediction error by guiding where new sensors are placed. The authors also show that the approach extends readily to multi-dimensional stochastic PDEs.

Experimental comparisons between different aPC expansion orders show that higher-order expansions improve accuracy in both the mean and the standard deviation of the predicted solution. This suggests room for further exploration of more complex stochastic phenomena and higher-dimensional problems.
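
For context (the notation here is generic, not quoted from the paper), a total-degree aPC expansion of order P over N random dimensions retains the following number of modal functions, so each increment in P improves accuracy at the cost of training more DNNs:

```latex
u(x;\xi) \;\approx\; \sum_{|\alpha| \le P} u_\alpha(x)\, \Phi_\alpha(\xi),
\qquad
\#\{\alpha : |\alpha| \le P\} = \binom{N+P}{P}.
```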

Implications and Future Directions

The introduced framework substantially extends PINNs: they not only solve PDEs more robustly but also deliver comprehensive uncertainty quantification. This is particularly valuable in domains that rely heavily on stochastic modeling, such as geophysics and materials science, where understanding uncertainty is crucial for model reliability and decision-making.

Future directions may include improving the framework's scalability and efficiency, for instance by integrating generative adversarial networks (GANs) to handle higher-dimensional stochastic spaces. Further work could also develop more systematic active learning strategies for sensor placement that balance information gain against computational expense.

In summary, the paper presents a sophisticated and comprehensive approach to incorporating uncertainty quantification within the context of PINNs, opening avenues for further research and practical applications in complex stochastic systems.

Authors (4)
  1. Dongkun Zhang (12 papers)
  2. Lu Lu (189 papers)
  3. Ling Guo (24 papers)
  4. George Em Karniadakis (216 papers)
Citations (373)