Koopman Invariant Subspace Learning
- Koopman invariant subspace learning is the identification of finite- or low-dimensional subspaces that remain invariant under the Koopman operator, facilitating linearization of nonlinear systems.
- Methodologies such as SSD, Grassmannian optimization, and deep dictionary learning are used to construct these subspaces with quantifiable error bounds through invariance proximity.
- Applications include model reduction, system identification, and control, leveraging SVD-based computations and neural network frameworks for efficient dynamical system analysis.
Koopman invariant subspace learning is the theoretical and algorithmic endeavor of identifying finite- or low-dimensional subspaces of observables that are (exactly or approximately) invariant under the action of the Koopman operator for a dynamical system. The construction and characterization of such subspaces are central to leveraging the linearity of the Koopman operator in the analysis, prediction, and control of nonlinear dynamical systems. The field is distinguished by precise definitions of subspace invariance, algorithmic strategies for discovering invariant subspaces from data, error analysis grounded in functional analysis and geometry, and extensive applications in model reduction, system identification, and control.
1. Koopman Operator and Subspace Invariance
Let $\mathcal{X}$ be the state space of a discrete or continuous dynamical system and $\mathcal{F}$ a (possibly infinite-dimensional) inner-product space of observables (functions $f : \mathcal{X} \to \mathbb{C}$). The (discrete-time) Koopman operator $\mathcal{K} : \mathcal{F} \to \mathcal{F}$ is defined by $(\mathcal{K}f)(x) = f(T(x))$, where $T : \mathcal{X} \to \mathcal{X}$ is the state-transition map. A subspace $\mathcal{S} \subseteq \mathcal{F}$ is Koopman-invariant if $\mathcal{K}\mathcal{S} \subseteq \mathcal{S}$, i.e., for every $f \in \mathcal{S}$, $\mathcal{K}f \in \mathcal{S}$.
Finite-dimensional Koopman-invariant subspaces provide a setting in which the nonlinear dynamics, when observed through suitable nonlinear observables, evolve linearly in time. If $\mathcal{S}$ has basis $\{\phi_1, \dots, \phi_n\}$, then

$$\mathcal{K}\phi_i = \sum_{j=1}^{n} K_{ij}\,\phi_j, \qquad i = 1, \dots, n,$$

with $K = [K_{ij}]$ an $n \times n$ matrix, and the evolution in the lifted coordinates $z_k = (\phi_1(x_k), \dots, \phi_n(x_k))^\top$ is given by $z_{k+1} = K z_k$. This enables spectral analysis, stable approximation, prediction, and Koopman-based model reduction for otherwise nonlinear systems (Brunton et al., 2015, Haseli et al., 2019).
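A canonical worked example of such a subspace, for the Koopman generator of the continuous-time system analyzed in Brunton et al. (2015), is

$$\dot{x}_1 = \mu x_1, \qquad \dot{x}_2 = \lambda\,(x_2 - x_1^2).$$

Taking the observables $y = (y_1, y_2, y_3) = (x_1, x_2, x_1^2)$ yields exactly linear lifted dynamics,

$$\frac{d}{dt}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} \mu & 0 & 0 \\ 0 & \lambda & -\lambda \\ 0 & 0 & 2\mu \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix},$$

so $\mathrm{span}\{x_1, x_2, x_1^2\}$ is a three-dimensional invariant subspace that contains the state.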
Approximate invariance is typically necessary in nonlinear and finite-dimensional settings. A subspace $\mathcal{S}$ exhibits uniform finite approximate closure if the closure error

$$\|\mathcal{K}f - P_{\mathcal{S}}\mathcal{K}f\|$$

is uniformly small over (normalized) $f \in \mathcal{S}$, where $P_{\mathcal{S}}$ denotes orthogonal projection onto $\mathcal{S}$ (Johnson et al., 2022, Haseli et al., 2023).
2. Quantifying and Certifying Invariance: Invariance Proximity
A central quantitative notion is the invariance proximity $\mathcal{I}(\mathcal{S})$, providing a tight, worst-case upper bound on the relative one-step prediction error incurred by projecting the Koopman operator onto $\mathcal{S}$ (Haseli et al., 2023, Haseli et al., 2023). Formally,

$$\mathcal{I}(\mathcal{S}) = \sup_{f \in \mathcal{S},\; \mathcal{K}f \neq 0} \frac{\|\mathcal{K}f - P_{\mathcal{S}}\mathcal{K}f\|}{\|\mathcal{K}f\|}.$$

This leads to the error-bound property: for every $f \in \mathcal{S}$,

$$\|\mathcal{K}f - P_{\mathcal{S}}\mathcal{K}f\| \le \mathcal{I}(\mathcal{S})\,\|\mathcal{K}f\|,$$

and no tighter uniform bound holds.
A key theoretical advance is a closed-form formula for $\mathcal{I}(\mathcal{S})$ via principal angles between $\mathcal{S}$ and its image $\mathcal{K}\mathcal{S}$. Let $\theta_{\max}$ be the largest principal angle (Jordan angle) between $\mathcal{S}$ and $\mathcal{K}\mathcal{S}$. Then

$$\mathcal{I}(\mathcal{S}) = \sin\theta_{\max},$$

which reduces computation to standard singular value decomposition (SVD) routines via

$$\sin\theta_{\max} = \sqrt{1 - \sigma_{\min}^2},$$

with $\sigma_{\min}$ the minimal singular value of the matrix of inner products between orthonormal bases for $\mathcal{S}$ and $\mathcal{K}\mathcal{S}$. This enables efficient certification and optimization of subspace invariance in practical computations (Haseli et al., 2023).
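A minimal numerical sketch of this computation, assuming a sampled-data setting in which the candidate subspace and its one-step image are represented by snapshot matrices (the function name and data layout below are illustrative, not taken from the cited work):

```python
import numpy as np

def invariance_proximity(DX, DY):
    """Sine of the largest principal angle between span(DX) and span(DY).

    DX : (N, k) dictionary snapshots spanning the candidate subspace S
    DY : (N, k) snapshots at the successor states, spanning the image of S
    (empirical inner product over N samples; assumes full column rank).
    """
    QX, _ = np.linalg.qr(DX)                                # orthonormal basis of span(DX)
    QY, _ = np.linalg.qr(DY)                                # orthonormal basis of span(DY)
    cosines = np.linalg.svd(QX.T @ QY, compute_uv=False)    # cosines of principal angles
    sigma_min = cosines.min()
    return float(np.sqrt(max(0.0, 1.0 - sigma_min**2)))     # sin(theta_max)

# Illustrative usage: a nearly invariant pair yields a value close to zero
rng = np.random.default_rng(0)
DX = rng.standard_normal((200, 5))
DY = DX @ rng.standard_normal((5, 5)) + 1e-3 * rng.standard_normal((200, 5))
print(invariance_proximity(DX, DY))
```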
3. Data-Driven Algorithms for Invariant Subspace Discovery
3.1 Symmetric Subspace Decomposition (SSD)
SSD is an iterative algorithm guaranteeing (under mild conditions) convergence to the maximal Koopman-invariant subspace contained in the span of a user-supplied dictionary. The method tests for range correspondence between the dictionary evaluated at the current and next time steps (forward and backward), systematically pruning directions that fail the invariance symmetry condition. The result is a reduced dictionary with basis functions spanning the maximal invariant subspace represented in the data (Haseli et al., 2019, Haseli et al., 2020); a schematic sketch of the core iteration follows the list of extensions below. Extensions include:
- Approximated-SSD, which relaxes to approximate invariance via truncated SVD.
- Streaming SSD (SSSD), supporting online data assimilation with fixed memory.
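In the sampled-data setting, with dictionary snapshots DX at the current states and DY at their successors, the symmetry condition underlying SSD can be read as range(DX C) = range(DY C) for a coefficient matrix C. The following is a schematic, simplified sketch of the resulting iteration; tolerances, rank tests, and variable names are illustrative, and the published algorithm should be consulted for the exact procedure and its guarantees:

```python
import numpy as np

def ssd(DX, DY, tol=1e-10, max_iter=100):
    """Schematic SSD iteration (simplified; see Haseli et al. for the full algorithm).

    DX, DY : (N, n) dictionary snapshots at current and successor states.
    Returns a coefficient matrix C whose span satisfies the data-based symmetry
    condition range(DX @ C) == range(DY @ C), or None if only the trivial
    subspace does.
    """
    n = DX.shape[1]
    C = np.eye(n)
    for _ in range(max_iter):
        k = C.shape[1]
        M = np.hstack([DX @ C, DY @ C])                    # (N, 2k) concatenation
        _, s, Vt = np.linalg.svd(M, full_matrices=True)
        null_dim = int(np.sum(s < tol)) + max(0, 2 * k - len(s))
        if null_dim == 0:
            return None                                    # no nontrivial invariant subspace
        Z = Vt[-null_dim:, :].T                            # (2k, null_dim) null-space basis
        ZA = Z[:k, :]                                      # block multiplying DX @ C
        if null_dim == k and np.linalg.matrix_rank(ZA, tol=tol) == k:
            return C                                       # symmetry condition satisfied
        C = np.linalg.qr(C @ ZA)[0]                        # prune and re-orthonormalize
    return C
```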
3.2 Grassmannian Optimization
Invariance proximity is minimized directly as a cost function on the Grassmann manifold. Optimization is carried out over the set of subspaces of fixed dimension, using gradient or conjugate-gradient methods that exploit the geometry of the Grassmannian and efficient SVD-based computation of principal angles. This approach enables systematic refinement (or augmentation) of dictionaries towards maximally invariant subspaces (Schurig et al., 10 Nov 2025, Haseli et al., 2023).
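A deliberately naive numerical illustration of this idea follows: the subspace is parameterized by an orthonormal coefficient matrix on the dictionary, the data-driven invariance proximity serves as the cost, and a crude finite-difference gradient step with QR retraction stands in for the Riemannian (conjugate-)gradient methods of the cited works. All names and hyperparameters are illustrative.

```python
import numpy as np

def proximity(DX, DY, B):
    """Data-driven invariance proximity of the subspace with coefficient matrix B
    (assumes DX @ B and DY @ B have full column rank)."""
    QX, _ = np.linalg.qr(DX @ B)
    QY, _ = np.linalg.qr(DY @ B)
    smin = np.linalg.svd(QX.T @ QY, compute_uv=False).min()
    return float(np.sqrt(max(0.0, 1.0 - smin**2)))

def refine_subspace(DX, DY, k, steps=100, lr=1e-2, eps=1e-6, seed=0):
    """Crude descent over k-dimensional coefficient subspaces of the dictionary span."""
    rng = np.random.default_rng(seed)
    n = DX.shape[1]
    B, _ = np.linalg.qr(rng.standard_normal((n, k)))       # random orthonormal start
    for _ in range(steps):
        f0 = proximity(DX, DY, B)
        G = np.zeros_like(B)
        for i in range(n):                                  # finite-difference gradient
            for j in range(k):
                Bp = B.copy()
                Bp[i, j] += eps
                G[i, j] = (proximity(DX, DY, Bp) - f0) / eps
        B, _ = np.linalg.qr(B - lr * G)                     # descent step + QR retraction
    return B, proximity(DX, DY, B)

# Illustrative usage on synthetic snapshots (N samples, n dictionary functions)
rng = np.random.default_rng(1)
DX = rng.standard_normal((300, 8))
DY = DX @ rng.standard_normal((8, 8)) + 1e-2 * rng.standard_normal((300, 8))
B, value = refine_subspace(DX, DY, k=3, steps=50)
print(value)
```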
3.3 Deep Learning and Dictionary Learning Frameworks
Neural networks can parameterize the dictionary and, jointly with the projected Koopman operator, can be trained using various loss functions enforcing approximate invariance. Notably, the Koopman auto-encoding loss combines prediction and reconstruction terms, and recent works incorporate multistep prediction consistency and information-theoretic regularization to mitigate overfitting and ensure expressivity (Gao et al., 16 Nov 2025, Cheng et al., 14 Oct 2025, Meng et al., 2023).
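A minimal PyTorch sketch of such a Koopman auto-encoding loss is given below; the architecture, latent dimension, and loss weights are placeholders rather than those of any specific cited model:

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    """Encoder lifts the state to latent observables z, a linear map K advances z,
    and the decoder maps z back to the state (illustrative sketch)."""
    def __init__(self, state_dim=2, latent_dim=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)   # projected Koopman operator

    def loss(self, x, x_next, w_rec=1.0, w_lin=1.0, w_pred=1.0):
        z, z_next = self.encoder(x), self.encoder(x_next)
        rec = nn.functional.mse_loss(self.decoder(z), x)                 # reconstruction
        lin = nn.functional.mse_loss(self.K(z), z_next)                  # latent linearity / invariance
        pred = nn.functional.mse_loss(self.decoder(self.K(z)), x_next)   # one-step state prediction
        return w_rec * rec + w_lin * lin + w_pred * pred

# Illustrative training step on a batch of snapshot pairs (x_k, x_{k+1})
model = KoopmanAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, x_next = torch.randn(32, 2), torch.randn(32, 2)    # placeholder data
opt.zero_grad()
model.loss(x, x_next).backward()
opt.step()
```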
State-inclusive logistic lifting (SILL), augmented with conjunctive logistic and RBF-like functions, provides analytic control of approximate subspace invariance, and neural learning approaches such as deepDMD and FlowDMD are empirically observed to recover nearly invariant subspaces of minimal required dimension (Johnson et al., 2022, Johnson et al., 2022, Meng et al., 2023).
4. Error Bounds, Theoretical Guarantees, and Structural Limitations
Theoretical analysis centers on uniform finite approximate closure—guaranteeing that subspace projection of the Koopman generator yields arbitrarily small error as dictionary parameters (e.g., steepness in SILL, dictionary size) increase. For heterogeneous dictionaries (e.g., "augSILL"), closure errors decay exponentially in parameter and measurement dimensions, explaining the empirical success and efficiency of deep-learning-derived dictionaries (Johnson et al., 2022, Johnson et al., 2022).
Principal-angle based invariance proximity provides a tight, computable certificate for the worst-case relative error, but it is conservative with respect to average prediction error. Moreover, a finite-dimensional invariant subspace that explicitly contains the state variables can exist only for systems with a single isolated fixed point and no other attractors, since a finite-dimensional linear system admits only one isolated fixed point; this rules out global finite-dimensional Koopman representations containing the state for systems with multiple fixed points, limit cycles, or richer attractor structures (Brunton et al., 2015).
5. Algorithmic and Computational Considerations
Key computational elements include:
- SVD-based computation of principal angles for invariance proximity and subspace augmentation.
- Nullspace and range calculations for SSD and P-SSD (parallel distributed SSD) supporting scaling to large dictionaries and distributed datasets (Haseli et al., 2020).
- Learning routines combining batch and streaming data acquisition, regularized regression (elastic net or group-LASSO) to achieve parsimonious representations, and stochastic optimization (Adam, SGD) for deep learning settings.
Empirical studies demonstrate the impact of dictionary construction, size, and mixing on predictive accuracy, parameter efficiency, and interpretability, with deep or heterogeneous dictionaries providing state-of-the-art performance at a fraction of the complexity of traditional monomial or kernel approaches (Johnson et al., 2022, Schurig et al., 10 Nov 2025).
6. Applications, Extensions, and Current Research Directions
Koopman-invariant subspaces underpin a variety of operator-theoretic model reduction techniques, spectral analysis, forecasting, and nonlinear control design—including pole placement via Koopman lifting, with closed-loop eigenvalue assignment (Iwata et al., 2022, Guo et al., 2023). Extensions to systems with parameters, stochastic dynamics, and control inputs have led to architectures capable of learning parameter-indexed operator families and leveraging invariance proximity for consistent model quality guarantees (Guo et al., 2023, Haseli et al., 2023).
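As a hedged sketch of the lifted pole-placement step, assume a discrete-time lifted model $z_{k+1} = A z_k + B u_k$ has already been identified in (approximately invariant) Koopman coordinates; the matrices below are placeholders, and the cited works differ in how the lifting and the feedback are constructed:

```python
import numpy as np
from scipy.signal import place_poles

# Placeholder lifted linear model z_{k+1} = A z_k + B u_k, assumed to have been
# identified in (approximately invariant) Koopman coordinates; values are illustrative.
A = np.array([[1.00, 0.10, 0.00],
              [0.00, 0.95, 0.10],
              [0.00, 0.00, 0.90]])
B = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [0.0, 0.1]])

# Assign discrete-time closed-loop eigenvalues inside the unit circle
desired = np.array([0.5, 0.6, 0.7])
fb = place_poles(A, B, desired)
F = fb.gain_matrix                     # state feedback u_k = -F z_k in lifted coordinates

print(np.linalg.eigvals(A - B @ F))    # approximately the requested poles
```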
Recent advances incorporate information-theoretic controls on latent representation—balancing compression and expressiveness through mutual information and entropy-based regularization (Cheng et al., 14 Oct 2025), multistep prediction losses (Gao et al., 16 Nov 2025), and invertible neural architectures for simultaneous reconstruction and invariant basis discovery (Meng et al., 2023).
Open problems and active areas include:
- Quantitative sharpness of invariance proximity for multi-step, average-case, or system-specific error metrics.
- Scalable, efficient algorithms for very high-dimensional systems and data streams.
- Integration of symmetry, control, and heterogeneity in learned dictionaries.
- Provable continual learning (online adaptation) with finite regret and stability guarantees (Gao et al., 16 Nov 2025).
7. Summary Table: Core Methods for Koopman Invariant Subspace Learning
| Approach | Core Principle | Primary Reference(s) |
|---|---|---|
| SSD/SSSD/(P-)SSD | Nullspace/range symmetry in data-lifted space | (Haseli et al., 2019, Haseli et al., 2020) |
| Grassmann Optimization | Subspace selection via geometric cost (principal angle) | (Schurig et al., 10 Nov 2025, Haseli et al., 2023) |
| Deep Dictionary Learning | Joint NN-based learning of observables/invariance | (Cheng et al., 14 Oct 2025, Meng et al., 2023) |
| SILL/augSILL (analytic) | Parameterized homogeneous/heterogeneous closure | (Johnson et al., 2022, Johnson et al., 2022) |
| RKHS-based Kernel Regression | Operator regression in kernel feature space | (Kostic et al., 2022, Froyland et al., 8 May 2025) |
| Sparsity-promoting Selection | Pruning of non-invariant or spurious components | (Pan et al., 2020) |
These approaches collectively underpin modern Koopman invariant subspace learning, providing a mathematically rigorous foundation for extracting linear models underlying high-dimensional, nonlinear dynamical systems. The advances in quantifying invariance, certifying model error, and developing scalable learning algorithms have cemented Koopman subspace learning as a paradigmatic tool in data-driven dynamical systems, with ongoing research focusing on robustness, adaptivity, and interpretability across scientific and engineering domains.