
A multi-stage deep learning based algorithm for multiscale model reduction (2009.11341v1)

Published 23 Sep 2020 in math.NA and cs.NA

Abstract: In this work, we propose a multi-stage training strategy for the development of deep learning algorithms applied to problems with multiscale features. Each stage of the proposed strategy shares an (almost) identical network structure and predicts the same reduced order model of the multiscale problem. The output of the previous stage is combined with an intermediate layer of the current stage. We numerically show that using different reduced order models as inputs of each stage can improve the training, and we propose several ways of adding different information into the system. These methods include mathematical multiscale model reductions and network approaches; we found that the mathematical approach is a systematic way of decoupling information and gives the best result. We finally verify our training methodology on a time-dependent nonlinear problem and a steady-state model.
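The core idea of the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): each stage is a small network of (almost) identical structure predicting the same reduced-order coefficients, and the previous stage's prediction is injected into an intermediate layer of the current stage. All names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_forward(x, params, prev_out=None):
    """One stage: input -> hidden -> output. If a previous stage's
    prediction is given, concatenate it with the hidden features."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    if prev_out is not None:
        h = np.concatenate([h, prev_out], axis=-1)  # inject prior stage's output
    return h @ W2 + b2

def init_params(in_dim, hidden, extra, out_dim):
    # 'extra' widens the second layer to accept a previous stage's output
    return (rng.standard_normal((in_dim, hidden)) * 0.1,
            np.zeros(hidden),
            rng.standard_normal((hidden + extra, out_dim)) * 0.1,
            np.zeros(out_dim))

in_dim, hidden, out_dim = 8, 16, 4    # out_dim = size of the reduced-order model
x = rng.standard_normal((5, in_dim))  # batch of multiscale inputs

stage1 = init_params(in_dim, hidden, 0, out_dim)
stage2 = init_params(in_dim, hidden, out_dim, out_dim)  # widened for stage-1 output

y1 = stage_forward(x, stage1)               # stage 1 prediction
y2 = stage_forward(x, stage2, prev_out=y1)  # stage 2 refines using stage 1's output

print(y1.shape, y2.shape)  # both stages predict the same reduced-order target
```

In the paper's setup, each stage would be trained (e.g. by gradient descent on a loss against the reduced-order target) rather than used with random weights; the sketch only shows how the stages are wired together.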

Citations (18)
