
Volume-Preserving Transformers for Learning Time Series Data with Structure (2312.11166v4)

Published 18 Dec 2023 in math.NA, cs.LG, and cs.NA

Abstract: Two of the many trends in neural network research of the past few years have been (i) the learning of dynamical systems, especially with recurrent neural networks such as long short-term memory networks (LSTMs), and (ii) the introduction of transformer neural networks for NLP tasks. Although some work has addressed the intersection of these two trends, those efforts largely used the vanilla transformer directly, without adapting its architecture to the setting of a physical system. In this work we develop a transformer-inspired neural network and use it to learn a dynamical system. For the first time, we change the activation function of the attention layer to imbue the transformer with structure-preserving properties that improve long-term stability. This proves highly advantageous when the network is applied to learning the trajectory of a rigid body.
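The abstract's central modification is swapping the attention layer's usual softmax activation for a structure-preserving one, so that the attention reweighting becomes a volume-preserving map. Below is a minimal NumPy sketch of one way such an activation can be realized: the skew-symmetric part of the attention scores is passed through a Cayley transform, which produces an orthogonal matrix with determinant one. All names, shapes, and the exact parameterization here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def cayley(a: np.ndarray) -> np.ndarray:
    """Cayley transform of a skew-symmetric matrix.

    For skew-symmetric a, (I - a) is invertible and the result
    (I - a)^{-1} (I + a) is orthogonal with determinant +1,
    i.e. a volume-preserving linear map.
    """
    eye = np.eye(a.shape[-1])
    return np.linalg.solve(eye - a, eye + a)

def volume_preserving_attention(z: np.ndarray,
                                wq: np.ndarray,
                                wk: np.ndarray) -> np.ndarray:
    """Hypothetical volume-preserving attention sketch.

    z  : (d, T) array of T snapshots of a d-dimensional state.
    wq : (d, d) query weights (illustrative name).
    wk : (d, d) key weights (illustrative name).

    The (T, T) score matrix is skew-symmetrized before the Cayley
    transform, so the reweighting of the T time steps is orthogonal
    and hence preserves volume.
    """
    scores = z.T @ wq.T @ wk @ z        # (T, T) correlation of time steps
    skew = 0.5 * (scores - scores.T)    # skew-symmetric part
    lam = cayley(skew)                  # orthogonal, det = +1
    return z @ lam                      # volume-preserving reweighting
```

As a quick check of the claimed property, `np.linalg.det(cayley(skew))` evaluates to 1 up to floating-point error for any skew-symmetric `skew`, whereas a softmax-normalized score matrix generally has determinant far from 1.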

Authors (4)
  1. Benedikt Brantner (3 papers)
  2. Guillaume de Romemont (1 paper)
  3. Michael Kraus (23 papers)
  4. Zeyuan Li (6 papers)
