Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification (1802.00850v1)

Published 2 Feb 2018 in physics.comp-ph, cs.LG, and stat.ML

Abstract: State-of-the-art computer codes for simulating real physical systems are often characterized by a vast number of input parameters. Performing uncertainty quantification (UQ) tasks with Monte Carlo (MC) methods is almost always infeasible because of the need to perform hundreds of thousands or even millions of forward model evaluations in order to obtain convergent statistics. One, thus, tries to construct a cheap-to-evaluate surrogate model to replace the forward model solver. For systems with large numbers of input parameters, one has to deal with the curse of dimensionality - the exponential increase in the volume of the input space, as the number of parameters increases linearly. In this work, we demonstrate the use of deep neural networks (DNN) to construct surrogate models for numerical simulators. We parameterize the structure of the DNN in a manner that lends the DNN surrogate the interpretation of recovering a low dimensional nonlinear manifold. The model response is a parameterized nonlinear function of the low dimensional projections of the input. We think of this low dimensional manifold as a nonlinear generalization of the notion of the active subspace. Our approach is demonstrated with a problem on uncertainty propagation in a stochastic elliptic partial differential equation (SPDE) with uncertain diffusion coefficient. We deviate from traditional formulations of the SPDE problem by not imposing a specific covariance structure on the random diffusion coefficient. Instead, we attempt to solve a more challenging problem of learning a map between an arbitrary snapshot of the diffusion field and the response.

Citations (384)

Summary

  • The paper introduces a DNN surrogate framework that maps high-dimensional inputs onto a low-dimensional latent space to efficiently address uncertainty quantification challenges.
  • It employs an encoder-decoder architecture combined with a novel hyperparameter tuning strategy using grid search and Bayesian global optimization for robust performance.
  • Numerical experiments on stochastic elliptic PDEs demonstrate the model’s accuracy and flexibility, highlighting its potential for scalable uncertainty analysis.

Deep UQ: Learning Deep Neural Network Surrogate Models for High Dimensional Uncertainty Quantification

The paper "Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification" investigates the application of deep neural networks (DNNs) to construct surrogate models for uncertainty quantification (UQ) tasks. This research addresses the computational challenges posed by high-dimensional input spaces in simulation-based uncertainty analysis, in particular the curse of dimensionality that arises when the number of uncertain parameters is large.

Background and Methodology

Traditional methods for UQ, such as Monte Carlo (MC) simulation, become computationally infeasible when the number of input parameters is large. Surrogate models are employed to tackle this, but commonly used techniques like Gaussian processes, polynomial chaos expansions, and radial basis functions scale poorly as the input dimension grows.
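To see why plain MC is so expensive, recall that its standard error shrinks only as 1/sqrt(N), so tightening the estimate by one decimal digit costs 100x more forward-model evaluations. A toy sketch (the `forward_model` here is a hypothetical stand-in for an expensive simulator, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(x):
    # Stand-in for an expensive simulator: a cheap nonlinear map
    # of a high-dimensional input (hypothetical, illustration only).
    return np.sin(x.sum(axis=-1) / np.sqrt(x.shape[-1]))

def mc_estimate(n_samples, dim=100):
    x = rng.standard_normal((n_samples, dim))
    y = forward_model(x)
    # Standard error of the MC mean shrinks as 1/sqrt(N),
    # regardless of the input dimension.
    return y.mean(), y.std(ddof=1) / np.sqrt(n_samples)

for n in (100, 10_000):
    mean, se = mc_estimate(n)
    print(f"N={n:>6}  mean={mean:+.4f}  std.err={se:.4f}")
```

The dimension-independent but slow 1/sqrt(N) rate is exactly what motivates replacing the simulator with a cheap surrogate.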

The approach detailed in this paper involves leveraging DNNs to construct surrogate models that effectively capture the complex relationships inherent in high-dimensional input spaces. The authors emphasize the DNN's capacity to model these relationships as low-dimensional nonlinear manifolds, extending the traditional concept of an active subspace into a broader, nonlinear context.
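For intuition about what the nonlinear manifold generalizes: the classical (linear) active subspace is spanned by the dominant eigenvectors of the expected outer product of the model gradient. A small numpy sketch on a toy function with a known one-dimensional active direction (the function, dimension, and sample counts are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model with a one-dimensional active subspace:
# f depends on x only through the projection w^T x.
dim = 20
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)

def f(x):
    return np.tanh(x @ w)

def grad_f(x):
    # Analytic gradient: (1 - tanh^2(w^T x)) * w
    return (1.0 - np.tanh(x @ w) ** 2)[:, None] * w

# Classical active subspace: dominant eigenvectors of
# C = E[grad f grad f^T], estimated by Monte Carlo.
X = rng.standard_normal((2000, dim))
G = grad_f(X)
C = G.T @ G / len(X)
eigvals, eigvecs = np.linalg.eigh(C)

# The top eigenvector should align with w (up to sign).
w_hat = eigvecs[:, -1]
alignment = abs(w_hat @ w)
print(f"top-eigenvector alignment with true direction: {alignment:.4f}")
```

The paper's encoder replaces the single linear projection `w^T x` with a learned nonlinear map to a low-dimensional latent space.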

The Proposed Framework

The paper presents a structured framework for employing deep neural networks for surrogate modeling in UQ tasks. The core of the framework is the use of DNNs to parameterize the surrogate model, which captures a nonlinear mapping from high-dimensional input spaces to model outputs. Key components of the methodology include:

  • Network Architecture: The DNN surrogate is decomposed into an encoder, which projects inputs to a lower-dimensional latent space, and a decoder, which predicts outputs from this latent representation. The encoder-decoder structure is guided by principles similar to those found in autoencoder networks.
  • Hyperparameter Optimization: The authors implement a novel model selection strategy that combines grid search with Bayesian global optimization (BGO) to tune hyperparameters, such as the number of hidden layers and the regularization constant, to obtain robust performance under limited training data.
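The bottleneck structure above can be sketched as a plain numpy forward pass (layer sizes, activations, and initialization are illustrative choices, not the paper's exact settings, and training is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

def init_layer(n_in, n_out):
    # He-style random initialization (illustrative choice).
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

class EncoderDecoderSurrogate:
    """Bottleneck MLP: high-dim input -> low-dim latent -> scalar output."""

    def __init__(self, input_dim=1024, latent_dim=4, hidden_dim=64):
        self.enc1 = init_layer(input_dim, hidden_dim)
        self.enc2 = init_layer(hidden_dim, latent_dim)   # projection to latent space
        self.dec1 = init_layer(latent_dim, hidden_dim)
        self.dec2 = init_layer(hidden_dim, 1)            # response prediction

    @staticmethod
    def _dense(x, layer, activation=np.tanh):
        W, b = layer
        return activation(x @ W + b)

    def encode(self, x):
        h = self._dense(x, self.enc1)
        return self._dense(h, self.enc2, activation=lambda z: z)  # linear latent

    def decode(self, z):
        h = self._dense(z, self.dec1)
        return self._dense(h, self.dec2, activation=lambda z: z).squeeze(-1)

    def __call__(self, x):
        return self.decode(self.encode(x))

model = EncoderDecoderSurrogate()
x = rng.standard_normal((8, 1024))   # 8 snapshots of a 1024-dim input field
print(model.encode(x).shape)         # low-dimensional latent projections
print(model(x).shape)                # predicted scalar responses
```

The latent dimension is the surrogate analogue of the active-subspace dimension and would be one of the hyperparameters tuned by the grid-search/BGO procedure.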

Application and Evaluation

The framework is evaluated on the problem of uncertainty propagation in a stochastic elliptic partial differential equation (SPDE) with an uncertain diffusion coefficient. Notably, this problem extends beyond typical UQ studies by not imposing a predefined covariance structure on the diffusion field, thus presenting a more challenging task for surrogate construction.
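To make the setup concrete, a one-dimensional analogue shows how (diffusion-snapshot, response) training pairs can be generated. The sampler below is an illustrative choice of positive random field, deliberately hedged since the paper does not fix a covariance model; the finite-difference solver is a standard discretization, not the paper's exact solver:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_log_diffusion(n, corr=0.05):
    """Draw a snapshot of a positive diffusion field a(x) on an n-point grid.

    Illustrative data-generating process: smooth white noise with a
    Gaussian kernel and exponentiate to ensure positivity.
    """
    x = np.linspace(0.0, 1.0, n)
    noise = rng.standard_normal(n)
    kernel = np.exp(-0.5 * ((x - 0.5) / corr) ** 2)
    kernel /= kernel.sum()
    log_a = np.convolve(noise, kernel, mode="same")
    return np.exp(log_a)

def solve_elliptic(a, f=1.0):
    """Solve -(a(x) u')' = f on (0,1) with u(0)=u(1)=0 by finite differences."""
    n = len(a)
    h = 1.0 / (n - 1)
    a_mid = 0.5 * (a[:-1] + a[1:])   # diffusion at cell interfaces
    n_int = n - 2                    # interior unknowns
    A = np.zeros((n_int, n_int))
    for i in range(n_int):
        A[i, i] = a_mid[i] + a_mid[i + 1]
        if i > 0:
            A[i, i - 1] = -a_mid[i]
        if i < n_int - 1:
            A[i, i + 1] = -a_mid[i + 1]
    rhs = np.full(n_int, f * h * h)
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A, rhs)
    return u

# One (input snapshot, response) training pair for the surrogate:
a = random_log_diffusion(129)
u = solve_elliptic(a)
response = u[len(u) // 2]   # e.g. the solution at the domain midpoint
print(f"u(0.5) = {response:.4f}")
```

The surrogate then learns the map from the full snapshot `a` to the response, which is precisely the harder formulation the paper adopts by not restricting the covariance of `a`.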

Numerical experiments demonstrate that the proposed DNN surrogate can predict the PDE solution accurately, even at spatial length scales not seen during training. This result highlights the surrogate model's flexibility and potential robustness in handling diverse input scenarios.

Implications and Future Work

The findings underscore the potential of DNNs to overcome traditional difficulties associated with high-dimensional UQ tasks. By providing a scalable surrogate modeling approach, this research paves the way for more efficient uncertainty analysis in computationally expensive simulations. The theoretical contributions also suggest opportunities for extending this framework to Bayesian neural networks, which would facilitate quantitative uncertainty assessments in DNN predictions through posterior distributions over network weights.

Moreover, future advancements could explore integrating multifidelity and multilevel model strategies, potentially enriching UQ studies by leveraging information across varied fidelity levels in simulations or experimental setups. In conclusion, the methodologies proposed in this paper reveal promising directions for enhancing the capabilities of uncertainty quantification through the use of advanced deep learning techniques.