
Domain decomposition for data-driven reduced modeling of large-scale systems (2311.00883v2)

Published 1 Nov 2023 in math.NA, cs.CE, and cs.NA

Abstract: This paper focuses on the construction of accurate and predictive data-driven reduced models of large-scale numerical simulations with complex dynamics and sparse training datasets. In these settings, standard, single-domain approaches may be too inaccurate or may overfit and hence generalize poorly. Moreover, processing large-scale datasets typically requires significant memory and computing resources which can render single-domain approaches computationally prohibitive. To address these challenges, we introduce a domain decomposition formulation into the construction of a data-driven reduced model. In doing so, the basis functions used in the reduced model approximation become localized in space, which can increase the accuracy of the domain-decomposed approximation of the complex dynamics. The decomposition furthermore reduces the memory and computing requirements to process the underlying large-scale training dataset. We demonstrate the effectiveness and scalability of our approach in a large-scale three-dimensional unsteady rotating detonation rocket engine simulation scenario with over $75$ million degrees of freedom and a sparse training dataset. Our results show that compared to the single-domain approach, the domain-decomposed version reduces both the training and prediction errors for pressure by up to $13 \%$ and up to $5\%$ for other key quantities, such as temperature, and fuel and oxidizer mass fractions. Lastly, our approach decreases the memory requirements for processing by almost a factor of four, which in turn reduces the computing requirements as well.
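
The localized-basis idea described in the abstract can be sketched in a few lines: rather than computing one reduced basis over the full state, a separate basis is computed from each subdomain's rows of the snapshot matrix, so each factorization touches only a fraction of the data. The following is a minimal illustrative sketch using proper orthogonal decomposition (POD) via the SVD, not the paper's actual implementation; the snapshot matrix `Q`, the equal-size partition, the retained rank, and the function names are assumptions made for the example.

```python
import numpy as np

def localized_pod_bases(snapshots, subdomain_indices, rank):
    """Compute a separate (localized) POD basis for each spatial subdomain.

    snapshots         : (n_dofs, n_snapshots) array of training states
    subdomain_indices : list of index arrays partitioning the rows (spatial DOFs)
    rank              : number of POD modes retained per subdomain
    """
    bases = []
    for idx in subdomain_indices:
        # SVD of only this subdomain's rows: much smaller than the global SVD,
        # which is the source of the memory and compute savings.
        U, _, _ = np.linalg.svd(snapshots[idx, :], full_matrices=False)
        bases.append(U[:, :rank])
    return bases

def global_pod_basis(snapshots, rank):
    """Single-domain reference: one POD basis over all degrees of freedom."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

# Toy example: 10,000 DOFs split into 4 equal subdomains, 20 snapshots.
rng = np.random.default_rng(0)
Q = rng.standard_normal((10_000, 20))
parts = np.array_split(np.arange(Q.shape[0]), 4)
local_bases = localized_pod_bases(Q, parts, rank=5)
```

In this sketch each subdomain basis captures only local spatial structure, which mirrors the abstract's point that localized basis functions can represent complex dynamics more accurately from sparse training data, while the per-subdomain factorizations keep memory and compute requirements below those of the single-domain approach.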
