
Koopman-Invariant Subspaces

Updated 22 August 2025
  • Koopman-invariant subspaces are finite-dimensional sets of observables that remain closed under the Koopman operator, enabling linear representations of nonlinear system dynamics.
  • Data-driven techniques such as EDMD and SABON utilize these subspaces to extract modal decompositions and improve predictions in complex, high-dimensional systems.
  • Recent advancements focus on machine-learned invariant bases that enhance robustness, control, and model reduction in applications ranging from chaotic flows to networked dynamical systems.

Koopman-invariant subspaces are finite-dimensional subspaces of observable functions that are closed under the action of the Koopman operator associated with a dynamical system. By restricting attention to such subspaces, one obtains a linear (typically matrix) representation of the system’s (potentially nonlinear) evolution, enabling spectral analysis, prediction, and control via linear methods. The explicit construction and learning of these subspaces, whether analytic or data-driven, remain central to operator-theoretic approaches to understanding nonlinear systems.

1. Mathematical Definition and Foundational Principles

Given a (possibly nonlinear) discrete- or continuous-time dynamical system $x_{t+1} = f(x_t)$ or $\dot{x} = f(x)$, the Koopman operator $\mathcal{K}$ acts on functions (observables) of the state via $\mathcal{K} g(x) = g(f(x))$. A finite-dimensional subspace $\mathcal{Y} = \text{span}\{y_1, \ldots, y_m\}$ is said to be Koopman-invariant if, for any $g \in \mathcal{Y}$, $\mathcal{K} g$ also lies in $\mathcal{Y}$.

This property allows the restriction of the infinite-dimensional Koopman operator to a finite-dimensional linear operator $A$ such that $g(f(x)) = A g(x)$ for all $x$. The concept generalizes the notion of invariant subspaces from linear algebra (Ide et al., 2012), where the enumeration and structure of $T$-invariant subspaces for linear operators $T$ on $\mathbb{R}^n$ can be precisely characterized using Jordan canonical form, partitions, and multipartitions.
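
A minimal numerical check (an illustration, not drawn from the cited papers) makes the defining property concrete: the Koopman operator acts linearly on observables even when $f$ itself is nonlinear, since linearity follows directly from composition.

```python
import numpy as np

# Nonlinear discrete-time map f: R^2 -> R^2.
def f(x):
    return np.array([0.9 * x[0], 0.5 * x[1] + x[0] ** 2])

# The Koopman operator acts on observables by composition: (K g)(x) = g(f(x)).
def koopman(g):
    return lambda x: g(f(x))

g1 = lambda x: x[0]          # observable y1 = x1
g2 = lambda x: x[0] ** 2     # observable y2 = x1^2
a, b = 2.0, -3.0

x = np.array([0.7, -1.2])
lhs = koopman(lambda x: a * g1(x) + b * g2(x))(x)   # K(a*g1 + b*g2)
rhs = a * koopman(g1)(x) + b * koopman(g2)(x)       # a*(K g1) + b*(K g2)
assert np.isclose(lhs, rhs)  # linearity holds despite nonlinear f
```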

In Koopman theory, the function-space setting renders the identification and construction of invariant subspaces nontrivial, because closure under $\mathcal{K}$ depends on both the system dynamics and the choice of observables.

2. Analytical Construction and Enumeration in Linear and Nonlinear Settings

In finite-dimensional linear algebra, the process of enumerating invariant subspaces proceeds by bringing the operator into Jordan canonical form (Ide et al., 2012). Each Jordan block of size $k$ produces a nested chain of $k+1$ invariant subspaces. For operators on $\mathbb{R}^n$ built from standard and real Jordan blocks, the total number of invariant subspaces $m$ is given by

$$m = \prod_{i=1}^{\ell} (d_i + 1)$$

where $d_i$ are the sizes of the blocks and $\ell$ is the total number of blocks (see Section 2E in (Ide et al., 2012)). Combinatorial methods involving partitions and multipartitions encode block-size arrangements, capturing all possible invariance counts. Extensions to vector spaces over arbitrary fields use “generalized Jordan blocks” tied to irreducible polynomials in $F[x]$, but the enumeration remains governed by the same principles.
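
For a quick worked example of the count, a small helper (a sketch assuming, as in the cited enumeration, that distinct blocks carry distinct eigenvalues, so the count is finite):

```python
from math import prod

def count_invariant_subspaces(block_sizes):
    """Number of invariant subspaces, m = prod_i (d_i + 1), for a Jordan
    form whose blocks have pairwise distinct eigenvalues."""
    return prod(d + 1 for d in block_sizes)

# Two blocks of sizes 2 and 3 give (2+1)*(3+1) = 12 invariant subspaces.
print(count_invariant_subspaces([2, 3]))  # 12
```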

For nonlinear systems, explicit analytic closures of invariant subspaces are rare and special. When polynomial observables close the dynamics (e.g., near a single fixed point, or for slow manifolds expressible in terms of state polynomials), finite-dimensional Koopman-invariant subspaces can be constructed: for example, with observables $\{x_1, x_2, x_1^N\}$ closing under differentiation or forward iteration (Brunton et al., 2015). This setting admits exact finite-dimensional linear representations only for restricted system classes (notably those with a single fixed point).
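
A discrete-time analogue of this construction can be verified directly; the following sketch (system coefficients chosen for illustration, not taken from the cited paper) checks that the observables $(x_1, x_2, x_1^2)$ evolve exactly linearly under a map with a quadratic coupling term.

```python
import numpy as np

a, b, c = 0.9, 0.5, 1.0

def f(x):  # nonlinear map with a quadratic coupling term
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def observables(x):  # dictionary spanning a Koopman-invariant subspace
    return np.array([x[0], x[1], x[0] ** 2])

# Exact finite-dimensional linear representation: g(f(x)) = A g(x).
A = np.array([[a, 0.0, 0.0],
              [0.0, b, c],
              [0.0, 0.0, a ** 2]])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    assert np.allclose(observables(f(x)), A @ observables(x))
```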

3. Data-Driven Learning and Machine-Learned Basis Construction

Recent work centers on learning Koopman-invariant subspaces from data using techniques from regression, neural networks, and sparse optimization. Extended Dynamic Mode Decomposition (EDMD) and its variants approximate the Koopman operator on the span of a dictionary of candidate observables (Haseli et al., 2019). EDMD finds a matrix $K$ best fitting $g(f(x)) = K g(x)$ across data samples. If the span of the dictionary is not itself invariant, the approximation is only locally valid. Algorithms such as Symmetric Subspace Decomposition (SSD) iteratively prune the dictionary to isolate the maximal invariant subspace contained in its span: a set of functions $D(x)C$ (for a reduction matrix $C$) satisfying $\mathcal{R}(D(Y)C) = \mathcal{R}(D(X)C)$ across all data, where $\mathcal{R}(\cdot)$ denotes the column space (Haseli et al., 2019, Haseli et al., 2020).
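
A minimal EDMD sketch along these lines (the dictionary and toy system are illustrative choices, not those of the cited papers):

```python
import numpy as np

def dictionary(X):
    """Evaluate candidate observables row-wise: [x1, x2, x1^2]."""
    return np.column_stack([X[:, 0], X[:, 1], X[:, 0] ** 2])

def edmd(X, Y):
    """Least-squares Koopman matrix K with D(Y) ≈ D(X) K."""
    DX, DY = dictionary(X), dictionary(Y)
    K, *_ = np.linalg.lstsq(DX, DY, rcond=None)
    return K

# Snapshot pairs (x_t, x_{t+1}) from the quadratic toy system above.
a, b, c = 0.9, 0.5, 1.0
f = lambda x: np.array([a * x[0], b * x[1] + c * x[0] ** 2])
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
Y = np.array([f(x) for x in X])

K = edmd(X, Y)  # exact here (K = A transposed), because this dictionary
                # happens to span a truly invariant subspace
```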

Recent developments employ neural networks to learn adaptively tailored dictionaries or basis functions. The Single Autoencoder Basis Operator Network (SABON) directly trains an encoder/decoder pair to produce orthonormal, locally supported basis functions $\{\phi_j\}$ that are nearly invariant under the action of the Koopman or transfer operator $\mathcal{L}$ (Froyland et al., 8 May 2025). The composite loss incorporates reconstruction error of both the observable and its transform, as well as a sparsity penalty to enforce locality. This data-driven approach yields basis functions that adapt to system-specific anisotropies and structure, outperforming classical isotropic choices (such as a Fourier basis) in high-dimensional or chaotic systems.
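
The following PyTorch sketch is a schematic of this kind of composite loss; the layer sizes, the placeholder operator matrix `L_matrix`, and the loss weights are illustrative assumptions, and the actual SABON architecture and training objective are specified in (Froyland et al., 8 May 2025).

```python
import torch
import torch.nn as nn

# Illustrative components: an encoder mapping observable samples to basis
# coefficients, a decoder reconstructing them, and a learnable finite matrix
# standing in for the restriction of the transfer operator.
encoder = nn.Sequential(nn.Linear(64, 32), nn.Tanh(), nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 64))
L_matrix = nn.Parameter(torch.eye(16))

def composite_loss(g, Lg, sparsity_weight=1e-3):
    """g: batch of observable samples; Lg: the same observables pushed
    forward by the operator (computed from trajectory data)."""
    z = encoder(g)
    recon = nn.functional.mse_loss(decoder(z), g)                # reconstruct g
    evolved = nn.functional.mse_loss(decoder(z @ L_matrix), Lg)  # operator consistency
    sparsity = z.abs().mean()                                    # promote local support
    return recon + evolved + sparsity_weight * sparsity

g = torch.randn(32, 64)        # 32 samples of an observable on a 64-point grid
Lg = torch.roll(g, 1, dims=1)  # placeholder push-forward, for illustration only
composite_loss(g, Lg).backward()
```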

Hybrid strategies, such as FlowDMD (Meng et al., 2023), leverage invertible neural network architectures (coupling flow-based INNs) so that the observable transformation is bijective, enabling accurate back-projection from invariant subspace coordinates to the original state, a property essential for prediction and control applications.
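
FlowDMD's particular network is described in (Meng et al., 2023); its generic ingredient, an affine coupling block that is bijective with a closed-form inverse, can be sketched as follows (layer widths are arbitrary choices).

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: bijective by construction, with an analytic
    inverse, so lifted coordinates can be mapped back to the state exactly."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.scale = nn.Sequential(nn.Linear(self.half, 32), nn.Tanh(),
                                   nn.Linear(32, dim - self.half))
        self.shift = nn.Sequential(nn.Linear(self.half, 32), nn.Tanh(),
                                   nn.Linear(32, dim - self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        y2 = x2 * torch.exp(self.scale(x1)) + self.shift(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        x2 = (y2 - self.shift(y1)) * torch.exp(-self.scale(y1))
        return torch.cat([y1, x2], dim=1)

# Round trip: inverse(forward(x)) == x up to floating-point error.
layer = AffineCoupling(4)
x = torch.randn(8, 4)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)
```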

4. Spectral Analysis and Koopman Eigenfunctions

Once a Koopman-invariant subspace is identified, its spectral properties provide deep insight into the system’s behavior. The restriction of $\mathcal{K}$ to the subspace is represented by a finite-dimensional Galerkin matrix $L$, with eigenpairs $(\lambda, v)$ corresponding to Koopman eigenvalues and modes:

$$L v = \lambda v, \quad \psi(x) = \sum_j v_j \phi_j(x)$$

where $\psi(x)$ is the Koopman eigenfunction associated with $\lambda$ (Froyland et al., 8 May 2025, Brunton et al., 2015). Spectral analysis of these operators uncovers system timescales, coherent structures, and metastable behavior. For example, in circle rotation systems, learned eigenvalues and eigenfunctions match the analytic forms precisely, demonstrating that machine-learned invariant subspaces preserve the spectral feature set when constructed appropriately.
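
For the toy quadratic system used in the sketches above, the eigenpairs of the finite matrix can be extracted and the defining relation $\psi(f(x)) = \lambda\,\psi(x)$ checked directly (illustrative code, using the convention $D(f(x)) = D(x)K$):

```python
import numpy as np

# Koopman matrix for the toy system above, in the convention D(f(x)) = D(x) K.
a, b, c = 0.9, 0.5, 1.0
f = lambda x: np.array([a * x[0], b * x[1] + c * x[0] ** 2])
K = np.array([[a, 0.0, 0.0],
              [0.0, b, 0.0],
              [0.0, c, a ** 2]])

eigvals, eigvecs = np.linalg.eig(K)

def eigenfunction(x, j):
    """psi_j(x) = sum_i (v_j)_i phi_i(x) = D(x) @ v_j for D = [x1, x2, x1^2]."""
    D = np.array([x[0], x[1], x[0] ** 2])
    return D @ eigvecs[:, j]

# Defining property of a Koopman eigenpair: psi(f(x)) = lambda * psi(x).
x = np.array([0.3, -0.8])
for j, lam in enumerate(eigvals):
    assert np.isclose(eigenfunction(f(x), j), lam * eigenfunction(x, j))
```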

The persistence of spectral properties under nearly invariant subspace approximation (where the subspace is only approximately closed under $\mathcal{K}$) is vital. Local support and orthogonality promote numerical robustness and stability in Galerkin projections.

5. Practical Implications and Applications

Koopman-invariant subspaces serve as the foundation for numerous data-driven methods in dynamical systems, particularly dynamic mode decomposition (DMD) and its extensions. High-fidelity invariant subspaces enable:

  • Accurate long-term prediction using linear evolution of projected coordinates (see the sketch after this list).
  • Modal analysis for coherent structure identification in high-dimensional flows and chaotic systems.
  • Optimal control design, including pole-placement and LQR, via linear representations within invariant subspaces (Iwata et al., 2022, Brunton et al., 2015).
  • Model reduction: informative invariant subspaces serve as a basis for reduced-order modeling, yielding compact and interpretable representations.
  • Robustness in operator approximation: locally supported, orthonormal bases adapt to the geometry of the underlying dynamics, improving projection error performance and sample efficiency (Froyland et al., 8 May 2025).
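
Because the toy dictionary used above is exactly invariant, linear evolution of the lifted coordinates reproduces the nonlinear trajectory; a minimal prediction sketch (same illustrative system as before):

```python
import numpy as np

# Long-horizon prediction by linear evolution in the lifted coordinates:
# z_t = D(x_t), z_{t+1} = z_t K, then read the state back off the dictionary
# entries that coincide with the state (here phi_1 = x1, phi_2 = x2).
a, b, c = 0.9, 0.5, 1.0
f = lambda x: np.array([a * x[0], b * x[1] + c * x[0] ** 2])
D = lambda x: np.array([x[0], x[1], x[0] ** 2])
K = np.array([[a, 0.0, 0.0],
              [0.0, b, 0.0],
              [0.0, c, a ** 2]])

x0 = np.array([0.4, 1.0])
z = D(x0)
x_true = x0.copy()
for _ in range(50):
    z = z @ K            # linear update in the invariant subspace
    x_true = f(x_true)   # nonlinear ground truth

assert np.allclose(z[:2], x_true)  # exact here: the subspace is truly invariant
```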

Recent advances show that the dynamic adaptation of machine-learned subspaces significantly outperforms static, hand-tuned dictionaries, especially in anisotropic or strongly nonlinear settings. Robust parallel and distributed algorithms for subspace learning enhance scalability for large-scale networked systems (Haseli et al., 2020).

6. Challenges, Limitations, and Future Directions

A central challenge is the selection or construction of observables that yield a Koopman-invariant subspace. Nonlinear systems with multiple attractors rarely admit finite-dimensional invariant subspaces that contain the state variables themselves (Brunton et al., 2015, Page et al., 2018), requiring either local expansions around simple invariant solutions or approximate closure techniques. Learning subspaces directly from data, rather than designing them a priori, has become standard, but issues remain:

  • Ensuring robustness to noise in data and preserving signal-to-noise ratios in subspace identification (Haseli et al., 2019).
  • Developing scalable streaming or distributed algorithms (e.g., SSSD and P-SSD) for large-scale or networked systems.
  • Clarifying the structure and impact of approximate closure; uniform finite approximate closure as in SILL-based and heterogeneous dictionaries provides analytic error bounds and interpretable models (Johnson et al., 2022).
  • Balancing interpretability, parameter complexity, and model generality—hybrid dictionaries (e.g., logistic + radial basis functions) yield accuracy with reduced parameter counts compared to deep learning approaches.
  • Extending methodologies to noncommutative or quantum dynamical systems, where invariant subspaces are analyzed via analogues of Beurling’s theorem in $H^2$ spaces of von Neumann algebras (Labuschagne, 2016).

Future directions entail direct latent-space spectral learning, adaptive network architectures, and further exploration of anisotropic basis representations for chaotic and high-dimensional systems (Froyland et al., 8 May 2025).

7. Connections to Operator Theory

The principles underlying Koopman-invariant subspaces extend to related areas in operator theory. Invariant subspace theorems (e.g., Beurling–Lax) and their generalizations to noncommutative $H^2$ spaces provide classification and decomposition strategies relevant to Koopman analysis (Labuschagne, 2016, Das et al., 24 Jul 2024). The study of nearly or almost invariant subspaces under finite-rank perturbations yields a refined understanding of stability and coherence under model misspecification, perturbation, or noise (Das et al., 24 Jul 2024).

These analytical frameworks interface directly with spectral theory, ergodic theory, prediction strategies, and quantum dynamical system analysis; they serve as a mathematical backbone for the interpretation and extension of Koopman operator techniques in both pure and applied contexts.


Koopman-invariant subspaces thus constitute a foundational and highly active area of research at the intersection of dynamical systems, linear operator theory, data-driven modeling, and machine learning. Their explicit characterization, algorithmic construction, and practical adaptation remain crucial for advancing the spectral and predictive understanding of nonlinear, high-dimensional, and networked dynamical systems.