Model Reduction for Large Scale Systems (2105.01433v1)

Published 4 May 2021 in math.NA, cs.NA, and math.OC

Abstract: Projection-based model order reduction has become a mature technique for the simulation of large classes of parameterized systems. However, several challenges remain for problems where the solution manifold of the parameterized system cannot be well approximated by linear subspaces. While the online efficiency of these model reduction methods is very convincing for problems with a rapid decay of the Kolmogorov n-width, there are still major drawbacks and limitations. Most importantly, the construction of the reduced system in the offline phase is extremely demanding in both CPU time and memory for large-scale and multiscale systems. For practical applications, it is thus necessary to derive model reduction techniques that do not rely on a classical offline/online splitting but allow for more flexibility in the usage of computational resources. A promising approach in this respect is model reduction with adaptive enrichment. In this contribution we investigate Petrov-Galerkin-based model reduction with adaptive basis enrichment within a trust-region approach for the solution of multiscale and large-scale PDE-constrained parameter optimization problems.
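The adaptive-enrichment idea in the abstract can be made concrete: the optimizer trusts the cheap reduced-order model (ROM) only inside a trust region, and whenever a residual-based error indicator flags the ROM as inaccurate at a new parameter, a full-order snapshot is computed and added to the reduced basis on the fly, so no exhaustive offline phase is needed. The following is a minimal sketch of that loop for a toy one-parameter problem; every name (fom_solve, rom_solve, the affine operator A(mu) = A0 + mu*A1) is an illustrative assumption rather than code from the paper, and a plain Galerkin projection stands in for the authors' Petrov-Galerkin formulation.

```python
# Minimal sketch, assuming a toy 1-parameter elliptic full-order model (FOM).
# All names are illustrative; Galerkin projection stands in for Petrov-Galerkin.
import numpy as np

n = 200  # FOM dimension

# Toy affinely parameterized operator: A(mu) = A0 + mu * A1
A0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
A1 = np.diag(np.linspace(0.1, 1.0, n))                    # parameter-dependent part
b = np.ones(n)
u_target = np.linalg.solve(A0 + 3.0 * A1, b)              # optimum sits at mu = 3

def fom_solve(mu):
    """Expensive full-order solve; called only to enrich the basis."""
    return np.linalg.solve(A0 + mu * A1, b)

def objective(u):
    """Tracking-type objective J(u) = 0.5 * ||u - u_target||^2."""
    return 0.5 * np.linalg.norm(u - u_target) ** 2

def rom_solve(mu, V):
    """Cheap reduced solve plus a residual-based error indicator."""
    A = A0 + mu * A1
    u_r = np.linalg.solve(V.T @ A @ V, V.T @ b)
    u = V @ u_r
    return u, np.linalg.norm(b - A @ u)

mu, radius = 0.5, 1.0
V = fom_solve(mu)[:, None]
V /= np.linalg.norm(V)                                    # one-snapshot initial basis

for it in range(30):
    # Trust-region step: minimize the ROM objective over the current radius
    # (crude sampling keeps the sketch short; a real solver would do better).
    cands = np.clip(np.linspace(mu - radius, mu + radius, 41), 0.1, 10.0)
    mu_new = min(cands, key=lambda m: objective(rom_solve(m, V)[0]))

    u_new, res = rom_solve(mu_new, V)
    if res > 1e-8:
        # ROM not trusted at mu_new: enrich the basis with a FOM snapshot
        snap = fom_solve(mu_new)
        q = snap - V @ (V.T @ snap)                       # orthogonalize
        if np.linalg.norm(q) > 1e-12:
            V = np.column_stack([V, q / np.linalg.norm(q)])
        u_new, _ = rom_solve(mu_new, V)

    # Accept the step if the objective actually decreased; else shrink radius
    if objective(u_new) < objective(rom_solve(mu, V)[0]):
        mu, radius = mu_new, min(2.0 * radius, 4.0)
    else:
        radius *= 0.5
    if radius < 1e-5:
        break

print(f"mu = {mu:.4f} (true optimum 3.0), basis size = {V.shape[1]}")
```

The point of the sketch is the interleaving: the expensive fom_solve is invoked only when the error indicator rejects the ROM at a candidate parameter, which is the flexibility in computational resources that the abstract contrasts with a classical offline/online splitting.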

Citations (4)
