
An efficient epistemic uncertainty quantification algorithm for a class of stochastic models: A post-processing and domain decomposition framework (2010.07863v1)

Published 15 Oct 2020 in math.NA, cs.NA, math.AP, and math.PR

Abstract: Partial differential equations (PDEs) are fundamental for the theoretical description of numerous physical processes driven by input fields over spatial configurations. Understanding such a process in general requires computational modeling of the PDE. Uncertainty in the computational model arises from imprecise knowledge of the input field or configuration. Uncertainty quantification (UQ) of the output physical process is typically carried out by modeling the uncertain input as a random field governed by an appropriate covariance function, which leads to high-dimensional stochastic counterparts of the deterministic PDE models. Such UQ-PDE models require a large number of PDE simulations at samples drawn from the high-dimensional probability space, with the probability distribution determined by the covariance function. UQ models with explicit knowledge of the covariance function are known as aleatoric UQ (AUQ) models; the lack of such explicit knowledge leads to epistemic UQ (EUQ) models, which typically require the solution of a large number of AUQ models. In this article, using a surrogate, post-processing, and domain decomposition framework with coarse stochastic solution adaptation, we develop an offline/online algorithm for efficiently simulating a class of EUQ-PDE models.

Authors (4)
  1. Stuart C Hawkins (1 paper)
  2. Mahadevan Ganesh (4 papers)
  3. Alexandre Tartakovsky (19 papers)
  4. Ramakrishna Tipireddy (14 papers)
