
Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders (1812.08373v3)

Published 20 Dec 2018 in cs.NA

Abstract: Nearly all model-reduction techniques project the governing equations onto a linear subspace of the original state space. Such subspaces are typically computed using methods such as balanced truncation, rational interpolation, the reduced-basis method, and (balanced) POD. Unfortunately, restricting the state to evolve in a linear subspace imposes a fundamental limitation to the accuracy of the resulting reduced-order model (ROM). In particular, linear-subspace ROMs can be expected to produce low-dimensional models with high accuracy only if the problem admits a fast decaying Kolmogorov $n$-width (e.g., diffusion-dominated problems). Unfortunately, many problems of interest exhibit a slowly decaying Kolmogorov $n$-width (e.g., advection-dominated problems). To address this, we propose a novel framework for projecting dynamical systems onto nonlinear manifolds using minimum-residual formulations at the time-continuous and time-discrete levels; the former leads to manifold Galerkin projection, while the latter leads to manifold least-squares Petrov--Galerkin (LSPG) projection. We perform analyses that provide insight into the relationship between these proposed approaches and classical linear-subspace reduced-order models; we also derive a posteriori discrete-time error bounds for the proposed approaches. In addition, we propose a computationally practical approach for computing the nonlinear manifold, which is based on convolutional autoencoders from deep learning. Finally, we demonstrate the ability of the method to significantly outperform even the optimal linear-subspace ROM on benchmark advection-dominated problems, thereby demonstrating the method's ability to overcome the intrinsic $n$-width limitations of linear subspaces.

Citations (604)

Summary

  • The paper introduces a framework that projects dynamical systems onto nonlinear manifolds using deep convolutional autoencoders.
  • It develops two projection methods—Galerkin and LSPG—to reduce model dimensions while maintaining accuracy in challenging regimes.
  • Numerical experiments show that the nonlinear approach outperforms traditional linear-subspace ROMs in efficiency and precision.

Model Reduction of Dynamical Systems on Nonlinear Manifolds Using Deep Convolutional Autoencoders

The paper explores a novel framework for model reduction of dynamical systems by projecting these systems onto nonlinear manifolds, as opposed to traditional linear subspaces. The approach introduces minimum-residual formulations at both the time-continuous and time-discrete levels, resulting in the manifold Galerkin and manifold least-squares Petrov--Galerkin (LSPG) projection methods. The fundamental aim is to mitigate the limitations of linear-subspace reduced-order models (ROMs), especially their inaccuracy on problems characterized by a slowly decaying Kolmogorov $n$-width, such as advection-dominated scenarios.
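Concretely, the two formulations can be sketched as follows (notation here is a paraphrase: $g$ denotes the decoder learned by the autoencoder, $\hat{x} \in \mathbb{R}^p$ the low-dimensional latent state, $J_g$ the decoder Jacobian, and $(\cdot)^+$ the Moore--Penrose pseudoinverse; the exact reference-state and residual conventions are those of the paper):

```latex
% Nonlinear trial manifold: approximate the state via the decoder g
\tilde{x}(t) = x_{\text{ref}} + g\big(\hat{x}(t)\big),
\qquad \hat{x}(t) \in \mathbb{R}^{p}, \quad p \ll N

% Manifold Galerkin: minimize the time-continuous residual,
% i.e., project the velocity onto the tangent space of the manifold
\dot{\hat{x}} = J_g(\hat{x})^{+}\, f\big(x_{\text{ref}} + g(\hat{x}),\, t\big)

% Manifold LSPG: minimize the fully discrete residual at each time step
\hat{x}^{n} \in \arg\min_{\hat{z}}\
\big\| r^{n}\big(x_{\text{ref}} + g(\hat{z})\big) \big\|_{2}^{2}
```

When $g$ is an affine map, the tangent space is constant and these formulations collapse to the classical linear-subspace Galerkin and LSPG ROMs, which is the sense in which the paper relates the new methods to existing ones.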

Key Contributions

  1. Nonlinear Manifold Projection:
    • The work presents methods to project system models onto arbitrary continuously-differentiable nonlinear manifolds using deep convolutional autoencoders. This approach leverages the expressiveness and scalability of deep learning models to compute the nonlinear manifold from snapshot data alone, without needing detailed knowledge about advection phenomena.
  2. Projection Techniques:
    • Two main projection techniques are derived: manifold Galerkin projection, which orthogonally projects the velocity onto the tangent space of the trial manifold, and manifold LSPG projection, which minimizes the time-discrete residual.
  3. Error Analysis and Comparisons:
    • The paper provides analyses comparing these novel approaches to classical linear-subspace methods. It further establishes conditions under which time discretization and manifold Galerkin projection commute, as well as conditions under which manifold Galerkin and manifold LSPG projection are equivalent.
  4. Numerical Experiments:
    • The proposed methods are tested on benchmark advection-dominated problems, including 1D Burgers' equation and a chemically reacting flow. The results illustrate that the nonlinear manifold approach significantly outperforms linear-subspace ROMs, often achieving superior accuracy with substantially lower-dimensional models.
  5. Autoencoder Training:
    • The approach utilizes deep convolutional autoencoders, applying modern training techniques such as stochastic gradient descent with minibatching and early stopping. The autoencoder architecture is tailored specifically for spatially distributed states and leverages convolutional layers for efficient feature extraction.
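The manifold Galerkin time stepping described in items 1–2 can be sketched in a few lines. The snippet below is a toy illustration, not the paper's implementation: it uses a hypothetical one-layer `tanh` decoder in place of a trained convolutional autoencoder, a stable linear full-order model in place of Burgers' equation, a finite-difference Jacobian, and explicit Euler integration of the latent ODE $\dot{\hat{x}} = J_g(\hat{x})^{+} f(g(\hat{x}))$.

```python
import numpy as np

# Toy setup: a stable linear full-order model dx/dt = A x of dimension N = 20,
# and a hypothetical nonlinear decoder g mapping a p = 2 latent state to R^N.
rng = np.random.default_rng(0)
N, p = 20, 2
A = -np.diag(np.linspace(0.5, 2.0, N))       # decaying full-order dynamics
W = rng.standard_normal((N, p))              # stand-in for trained decoder weights

def g(z):
    """Decoder: nonlinear map from the latent space to the full state space."""
    return np.tanh(W @ z)

def jacobian(z, eps=1e-6):
    """Finite-difference Jacobian of the decoder, J_g in R^{N x p}."""
    J = np.empty((N, p))
    base = g(z)
    for j in range(p):
        dz = np.zeros(p)
        dz[j] = eps
        J[:, j] = (g(z + dz) - base) / eps
    return J

def galerkin_step(z, dt):
    """One explicit Euler step of the manifold Galerkin ROM:
    dz/dt = pinv(J_g(z)) @ f(g(z))."""
    J = jacobian(z)
    f = A @ g(z)                              # full-order velocity at decoded state
    zdot = np.linalg.pinv(J) @ f              # project velocity onto tangent space
    return z + dt * zdot

z = np.array([0.5, -0.3])
for _ in range(100):
    z = galerkin_step(z, dt=0.01)
x_rom = g(z)                                  # decoded full-order state, shape (20,)
```

Note that only the $p$-dimensional latent ODE is integrated; the full state is recovered on demand through the decoder. In the paper, the decoder Jacobian is available analytically via backpropagation rather than finite differences.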
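The training recipe in item 5 (minibatch SGD on snapshot data with early stopping on a held-out set) can be illustrated with a deliberately simplified example. The sketch below trains a *linear* autoencoder `W_d @ W_e` on synthetic low-rank snapshots; the paper itself uses deep convolutional layers and a richer optimizer, so everything here beyond the minibatching/early-stopping skeleton is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, n_snap = 30, 3, 200

# Synthetic snapshot matrix with low intrinsic dimension plus noise
# (columns are state snapshots), split into training and validation sets.
basis = rng.standard_normal((N, p))
X = basis @ rng.standard_normal((p, n_snap)) + 0.01 * rng.standard_normal((N, n_snap))
X_train, X_val = X[:, :160], X[:, 160:]

We = 0.01 * rng.standard_normal((p, N))       # encoder weights
Wd = 0.01 * rng.standard_normal((N, p))       # decoder weights

def loss(W_e, W_d, data):
    err = W_d @ (W_e @ data) - data
    return np.mean(err ** 2)

best_val, best = np.inf, (We.copy(), Wd.copy())
patience, bad_epochs, lr, batch = 10, 0, 1e-3, 32
for epoch in range(500):
    perm = rng.permutation(X_train.shape[1])
    for start in range(0, len(perm), batch):  # minibatch SGD
        B = X_train[:, perm[start:start + batch]]
        err = Wd @ (We @ B) - B               # reconstruction error
        gWd = 2 * err @ (We @ B).T / B.shape[1]
        gWe = 2 * Wd.T @ err @ B.T / B.shape[1]
        Wd -= lr * gWd
        We -= lr * gWe
    v = loss(We, Wd, X_val)
    if v < best_val - 1e-8:                   # validation improved: keep weights
        best_val, best, bad_epochs = v, (We.copy(), Wd.copy()), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # early stopping
            break
We, Wd = best
```

The decoder half of the trained autoencoder is what plays the role of $g$ in the projection methods above; early stopping on held-out snapshots guards against memorizing the training trajectories.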

Implications and Future Directions

The implications of this work are twofold. Practically, the proposed framework opens new avenues for efficiently simulating complex dynamical systems where traditional ROMs fail to provide adequate solutions. Theoretically, this research extends the landscape of model reduction techniques by embedding deep learning directly into the core of dynamical systems simulation.

Future research could focus on integrating hyper-reduction techniques to further decrease computational costs and exploring structure-preserving constraints to enhance model fidelity. Additionally, the methodology could be extended to incorporate real-time applications where rapid simulations are imperative.

The introduction of deep autoencoders into the ROM framework is a significant development, providing solutions that are more adaptable and capable of handling the intricacies associated with nonlinear dynamical systems. This work nudges the field toward a synergy between machine learning and classical model reduction, promising exciting advancements in computational efficiency and model accuracy.