Emulation of CPU-demanding reactive transport models: comparison of Gaussian processes, polynomial chaos expansion and deep neural networks (1809.07305v2)

Published 19 Sep 2018 in physics.comp-ph and physics.geo-ph

Abstract: This paper presents a detailed comparison of three methods for emulating CPU-intensive reactive transport models (RTMs): Gaussian processes (GPs), polynomial chaos expansion (PCE), and deep neural networks (DNNs). Besides direct emulation of the simulated uranium concentration time series, replacing the original RTM by its emulator is also investigated for global sensitivity analysis (GSA), uncertainty propagation (UP), and probabilistic calibration using Markov chain Monte Carlo (MCMC) sampling. The selected DNN is found to be superior to both GPs and PCE in reproducing the input-output behavior of the considered 8-dimensional and 13-dimensional CPU-intensive RTMs. The two PCE variants used, standard PCE and sparse PCE (sPCE), consistently provide the least accuracy while differing little from each other in performance. As a consequence of its better emulation capabilities, the DNN outperforms the other two methods for UP. In addition, DNNs and GPs offer equally good approximations of the true first-order and total-order Sobol sensitivity indices, while PCE performs somewhat worse. Most surprisingly, despite its superior emulation skills, the DNN approach leads to the worst solution of the considered synthetic inverse problem, which involves 1224 measurements with low noise. This apparently contradictory behavior is at least partially due to the small but complicated deterministic noise that affects DNN-based predictions; this complex error structure can drive the emulated solutions far away from the true posterior distribution. Overall, our findings indicate that when the available training set is relatively small (75 to 500 input-output examples) and fixed beforehand, DNNs can emulate RTMs well but are not suited to emulation-based inversion. In contrast, GPs perform fairly well across all considered tasks: direct emulation, GSA, UP, and inversion.
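The emulation-based workflow the abstract describes (train a surrogate on a small, fixed set of simulator runs, then use the cheap surrogate for uncertainty propagation and global sensitivity analysis) can be sketched in a few lines. The following is a minimal, hypothetical illustration rather than the paper's actual code: `expensive_rtm` is a toy stand-in for the CPU-intensive reactive transport model, and the Matern kernel, training-set size, and Monte Carlo sample sizes are assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim = 8  # the paper considers 8- and 13-dimensional RTMs

def expensive_rtm(x):
    """Toy stand-in for the CPU-intensive simulator (hypothetical)."""
    return np.sin(np.pi * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2:].sum(axis=1)

# Small, fixed training set (the paper uses 75 to 500 examples).
X_train = rng.uniform(0.0, 1.0, size=(200, dim))
y_train = expensive_rtm(X_train)

# Fit the Gaussian-process surrogate once, on the fixed design.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# Uncertainty propagation: 10^5 emulator calls cost almost nothing
# compared with 10^5 runs of the original model.
X_mc = rng.uniform(0.0, 1.0, size=(100_000, dim))
y_mc = gp.predict(X_mc)
print(f"UP via emulator: mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")

# First-order Sobol index of input 0 via a simple pick-freeze estimator,
# again evaluated entirely on the cheap emulator.
A = rng.uniform(0.0, 1.0, size=(20_000, dim))
B = rng.uniform(0.0, 1.0, size=(20_000, dim))
AB = B.copy()
AB[:, 0] = A[:, 0]  # keep input 0 fixed, resample all other inputs
yA, yAB = gp.predict(A), gp.predict(AB)
S1 = np.cov(yA, yAB)[0, 1] / yA.var()
print(f"Emulator-based first-order Sobol index of input 0: {S1:.3f}")
```

Note that the GP also returns a predictive standard deviation (via `gp.predict(X, return_std=True)`), which is one reason GP surrogates remain attractive for emulation-based MCMC calibration of the kind studied in the paper.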
