Koopman Theory and Linear Approximation Spaces (1811.10809v1)

Published 27 Nov 2018 in math.DS and math.NA

Abstract: Koopman theory studies dynamical systems in terms of the operator-theoretic properties of the Perron-Frobenius operator $\mathcal{P}$ and the Koopman operator $\mathcal{U}$. In this paper, we derive rates of convergence for approximations of $\mathcal{P}$ or $\mathcal{U}$ that are generated by finite-dimensional bases such as wavelets, multiwavelets, and eigenfunctions, as well as for approaches that use samples of the input and output of the system in conjunction with these bases. We introduce a general class of priors that describe the information available for constructing such approximations and that facilitate error estimates in many applications of interest. These priors are defined in terms of the action of $\mathcal{P}$ or $\mathcal{U}$ on certain linear approximation spaces. The rates of convergence for the estimates of these operators are investigated in a variety of situations motivated by assumptions arising in practical applications. When the estimates are generated from samples, it is shown that the error in approximating the Perron-Frobenius or Koopman operator decomposes into two parts: the approximation error and the sampling error. This result emphasizes that sample-based estimates of these operators are subject to the well-known trade-off between the bias and variance contributions to the error, a balance that also features in nonlinear regression and statistical learning theory.
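The sample-based estimation the abstract describes can be illustrated with a minimal sketch in the style of extended dynamic mode decomposition: given paired samples of a system's input and output and a finite-dimensional basis of observables, the Koopman operator restricted to the span of that basis is estimated by least squares. The dynamics $T(x) = 0.5x$, the monomial basis, and all parameter choices below are assumptions for illustration only, not the paper's construction.

```python
import numpy as np

def koopman_estimate(x, y, basis):
    """Least-squares estimate of the Koopman operator on span(basis).

    x: (n,) samples of the state; y: (n,) corresponding samples of T(x).
    basis: list of scalar observables psi_j.
    Returns the d x d matrix K with (U psi_j) ~ sum_i K[i, j] psi_i.
    """
    Psi_x = np.column_stack([psi(x) for psi in basis])  # n x d
    Psi_y = np.column_stack([psi(y) for psi in basis])  # n x d, psi_j(T(x))
    # Solve Psi_x @ K ~= Psi_y in the least-squares sense.
    K, *_ = np.linalg.lstsq(Psi_x, Psi_y, rcond=None)
    return K

# Illustrative example: linear map T(x) = 0.5 x with monomial basis
# {1, x, x^2}. For this system the monomials are Koopman eigenfunctions
# with eigenvalues 1, 0.5, and 0.25, so the estimate is diagonal.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 0.5 * x
basis = [lambda s: np.ones_like(s), lambda s: s, lambda s: s**2]
K = koopman_estimate(x, y, basis)
print(np.round(K, 3))  # approximately diag(1, 0.5, 0.25)
```

With noise-free samples the estimate recovers the operator exactly on the chosen subspace; with noisy or few samples, the sampling-error term the abstract mentions appears, and enlarging the basis trades it against the approximation error, mirroring the bias-variance balance discussed in the abstract.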
