
Machine Learning Approach to Model Order Reduction of Nonlinear Systems via Autoencoder and LSTM Networks (2109.11213v1)

Published 23 Sep 2021 in cs.CE

Abstract: In analyzing and assessing the condition of dynamical systems, it is necessary to account for nonlinearity. Recent advances in computation have rendered previously computationally infeasible analyses readily executable on common computer hardware. However, in certain use cases, such as uncertainty quantification or high-precision real-time simulation, the computational cost remains a challenge. This necessitates the adoption of reduced-order modelling methods, which can reduce the computational toll of such nonlinear analyses. In this work, we propose a reduction scheme that exploits an autoencoder as a means to infer a latent space from output-only response data. This latent space, which in essence approximates the system's nonlinear normal modes (NNMs), serves as an invertible reduction basis for the nonlinear system. The proposed machine learning framework is then complemented by long short-term memory (LSTM) networks in the reduced space. These are used to create a nonlinear reduced-order model (ROM) of the system, able to recreate the full system's dynamic response under a known driving input.
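The abstract describes a two-stage architecture: an autoencoder that compresses full-order response data into a low-dimensional latent space approximating the NNMs, and an LSTM that evolves those latent coordinates in time under a known driving input before decoding back to the full response. The following is a minimal sketch of that idea in PyTorch; the class names, layer sizes, and the specific way the forcing is concatenated to the latent state are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    """Maps full-order response snapshots (dim n_full) to a low-dimensional
    latent space (dim n_latent) intended to approximate the system's NNMs.
    Layer widths here are placeholders, not the paper's settings."""

    def __init__(self, n_full: int, n_latent: int, n_hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_full, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_full),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent (reduced) coordinates
        return self.decoder(z), z    # reconstruction and latent code


class LatentLSTM(nn.Module):
    """Reduced-order model: propagates the latent coordinates in time,
    conditioned on the known driving input (dim n_input)."""

    def __init__(self, n_latent: int, n_input: int, n_hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_latent + n_input, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_latent)

    def forward(self, z_seq, u_seq):
        # z_seq: (batch, time, n_latent) latent trajectory so far
        # u_seq: (batch, time, n_input)  known forcing / driving input
        h, _ = self.lstm(torch.cat([z_seq, u_seq], dim=-1))
        return self.head(h)          # predicted latent coordinates


# Illustrative usage on random data (dimensions are arbitrary):
ae = Autoencoder(n_full=100, n_latent=4)
rom = LatentLSTM(n_latent=4, n_input=1)
x = torch.randn(8, 200, 100)                    # response snapshots
u = torch.randn(8, 200, 1)                      # driving input
_, z = ae(x)                                    # encode to latent space
z_next = rom(z, u)                              # advance ROM in latent space
x_rec = ae.decoder(z_next)                      # decode back to full response
```

In a sketch like this, the autoencoder would typically be trained first on reconstruction of the output-only response data, after which the LSTM is trained to predict the encoded trajectories given the forcing, so that at simulation time the full response is recovered by decoding the LSTM's latent predictions.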

Citations (32)
