Linear State-Space Representation
- Linear State-Space Representation is a formalism that uses matrices and linear operators to model the evolution and observation of dynamical systems.
- It encompasses various models including LTI, parameter-varying, stochastic, and tensor forms, unifying methods for analysis, identification, and control.
- Advanced techniques like realization theory, subspace identification, and structure-preserving linearizations enhance practical applications in control, estimation, and simulation.
A linear state-space representation expresses the evolution and observation of dynamical systems via linear algebraic and operator-theoretic structures, providing a unifying formalism for the analysis, identification, simulation, and control of both finite- and infinite-dimensional models. This concept encompasses classical linear time-invariant (LTI), parameter-varying, stochastic, and tensor-structured models, and extends through realization theory to general classes of input-output or transfer function descriptions.
1. Canonical Forms of Linear State-Space Models
The classical continuous-time LTI state-space model is specified as

$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t),$$

where $x(t) \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^m$ is the input, $y(t) \in \mathbb{R}^p$ is the output, and $A, B, C, D$ are real-valued matrices of compatible dimensions. In discrete time, one replaces the differential with an advance operator: $x(k+1) = A\,x(k) + B\,u(k)$, $y(k) = C\,x(k) + D\,u(k)$ (Murthy, 2013, Mercère, 2013).
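A minimal simulation sketch of the discrete-time recursion above (the matrices below are illustrative, not taken from any cited system):

```python
import numpy as np

# Illustrative 2-state, single-input, single-output discrete-time LTI system.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate_lti(A, B, C, D, u, x0=None):
    """Propagate x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k)."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    ys = []
    for uk in u:
        uk = np.atleast_1d(uk)
        ys.append(C @ x + D @ uk)
        x = A @ x + B @ uk
    return np.array(ys)

y = simulate_lti(A, B, C, D, u=np.ones(50))   # step response
```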
The innovation form, particularly relevant for identification and estimation, augments the process with a noise-injection (Kalman gain) term: $x(k+1) = A\,x(k) + B\,u(k) + K\,e(k)$, $y(k) = C\,x(k) + D\,u(k) + e(k)$, where $e(k)$ is the innovation process (Mercère, 2013).
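Run as a one-step-ahead predictor, the innovation form computes $e(k) = y(k) - C\hat{x}(k) - D u(k)$ and $\hat{x}(k+1) = A\hat{x}(k) + B u(k) + K e(k)$; a minimal sketch, assuming a given gain $K$:

```python
import numpy as np

def innovation_predictor(A, B, C, D, K, u, y):
    """One-step-ahead predictor for the innovation form:
    e(k) = y(k) - C xhat(k) - D u(k),  xhat(k+1) = A xhat(k) + B u(k) + K e(k)."""
    xhat = np.zeros(A.shape[0])
    y_pred, innovations = [], []
    for uk, yk in zip(u, y):
        uk, yk = np.atleast_1d(uk), np.atleast_1d(yk)
        yk_pred = C @ xhat + D @ uk
        e = yk - yk_pred                      # innovation
        xhat = A @ xhat + B @ uk + K @ e      # state update with Kalman gain
        y_pred.append(yk_pred)
        innovations.append(e)
    return np.array(y_pred), np.array(innovations)
```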
Extensions encompass:
- Parameter-varying systems: The state update and output matrices depend affinely on a scheduling parameter, forming the class of LPV-SSA (static affine dependence) models (Petreczky et al., 2016); a simulation sketch follows this list.
- Stochastic/state-driven processes: State evolution may be driven by Lévy processes or Brownian motion, yielding stochastic differential equations with possibly heavy-tailed or non-Gaussian increments (Godsill et al., 2019, Jordán et al., 2021).
- Tensor state-space models: The state, inputs, and outputs can be represented as higher-order tensors, with multilinear evolution equations governing the coupled dynamics of spatially or structurally complex systems (Murthy, 2013).
- Infinite-dimensional systems: PDEs or systems with distributed parameters admit PIE (partial-integral equation) state-space representations, formulated on appropriate function spaces with unbounded operators and infinite-dimensional states (Jagt et al., 2025).
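As referenced in the parameter-varying item above, a minimal sketch of an LPV-SSA update with static affine dependence on a scalar scheduling signal $p(k)$ (all matrices and the scheduling trajectory are illustrative assumptions):

```python
import numpy as np

# Affine dependence: A(p) = A0 + p * A1, and similarly for B and C.
A0 = np.array([[0.7, 0.1], [0.0, 0.6]]);  A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
B0 = np.array([[0.0], [1.0]]);            B1 = np.array([[0.5], [0.0]])
C0 = np.array([[1.0, 0.0]]);              C1 = np.array([[0.0, 0.3]])

def simulate_lpv_ssa(u, p, x0=(0.0, 0.0)):
    """x(k+1) = (A0 + p(k) A1) x(k) + (B0 + p(k) B1) u(k);  y(k) = (C0 + p(k) C1) x(k)."""
    x, ys = np.array(x0, dtype=float), []
    for uk, pk in zip(u, p):
        Ak, Bk, Ck = A0 + pk * A1, B0 + pk * B1, C0 + pk * C1
        ys.append(Ck @ x)
        x = Ak @ x + Bk @ np.atleast_1d(uk)
    return np.array(ys)

y = simulate_lpv_ssa(u=np.ones(100), p=np.sin(0.1 * np.arange(100)))
```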
2. Realization Theory and Minimality
A central problem is realization: given an input-output (IO) function, under what conditions does there exist a finite-dimensional linear state-space system that reproduces its behavior? For LTI systems, realization theory is classically based on the Hankel matrix of Markov parameters (impulse response coefficients).
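A minimal SISO sketch of the classical Ho–Kalman construction implied here, factoring a truncated Hankel matrix of Markov parameters via SVD to recover $(A, B, C)$ (truncation sizes and the rank tolerance are assumptions):

```python
import numpy as np

def ho_kalman_siso(markov, n_rows=10, n_cols=10, tol=1e-8):
    """Realize (A, B, C) from scalar Markov parameters h_k = C A^(k-1) B, k = 1, 2, ...
    markov[0] is taken as h_1; the feedthrough D = h_0 is handled separately."""
    H = np.array([[markov[i + j] for j in range(n_cols)] for i in range(n_rows)])
    U, s, Vt = np.linalg.svd(H)
    n = int(np.sum(s > tol * s[0]))                 # estimated state dimension
    Obs = U[:, :n] * np.sqrt(s[:n])                 # extended observability matrix
    Ctr = np.sqrt(s[:n])[:, None] * Vt[:n, :]       # extended controllability matrix
    A = np.linalg.pinv(Obs[:-1, :]) @ Obs[1:, :]    # shift-invariance of observability
    B = Ctr[:, :1]
    C = Obs[:1, :]
    return A, B, C

# Example: Markov parameters of a known first-order system, h_k = 0.9**(k-1).
h = [0.9 ** k for k in range(25)]
A, B, C = ho_kalman_siso(h)
```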
For LPV-SSA systems, realization is characterized by a Volterra-series-type impulse response representation, and the minimal state dimension equals the rank of the associated infinite Hankel matrix constructed from the sequence of sub-Markov parameters:
- Existence: The IO map is realizable by an LPV-SSA if and only if the corresponding Hankel matrix is finite rank; the rank gives the dimension of a minimal realization.
- Minimality: A realization is minimal iff it is both span-reachable and observable.
- Isomorphism: Any two minimal LPV-SSA realizations of the same IO function are related by a state-space isomorphism independent of the scheduling parameter (Petreczky et al., 2016).
For general multivariable systems with a rational transfer function $G(\lambda)$, vector spaces of state-space linearizations (matrix pencils) are constructed to linearize the transfer operator. Almost all pencils in these spaces are strong (Rosenbrock) linearizations, and minimal realizations are recovered directly from the block structure of the pencil (Bist et al., 2024).
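To make the pencil idea concrete in its most elementary (purely polynomial) instance, the sketch below forms the standard first companion linearization of a matrix polynomial and checks that its generalized eigenvalues reproduce the polynomial eigenvalues; this is not the construction of (Bist et al., 2024) for general rational transfer functions, only its simplest relative:

```python
import numpy as np
from scipy.linalg import eigvals

def first_companion_pencil(coeffs):
    """First companion linearization L(lam) = lam*X + Y of P(lam) = sum_i coeffs[i] lam^i,
    where coeffs = [A_0, A_1, ..., A_d] are n x n blocks."""
    d = len(coeffs) - 1
    n = coeffs[0].shape[0]
    X = np.eye(d * n)
    X[:n, :n] = coeffs[d]                        # leading coefficient block A_d
    Y = np.zeros((d * n, d * n))
    for i in range(d):                           # top block row: [A_{d-1}, ..., A_0]
        Y[:n, i * n:(i + 1) * n] = coeffs[d - 1 - i]
    Y[n:, :(d - 1) * n] = -np.eye((d - 1) * n)   # subdiagonal -I blocks
    return X, Y

# Scalar example: P(lam) = lam^2 - 3 lam + 2 has eigenvalues 1 and 2.
X, Y = first_companion_pencil([np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])])
vals = eigvals(-Y, X)                            # solves det(lam*X + Y) = 0
```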
3. Identification, Structure, and Computational Techniques
State-space identification aims to recover system matrices from input-output data:
- Subspace Identification: Regresses stacked future outputs as a linear function of past input/output blocks and future inputs, leveraging block-Hankel matrices and SVD-enforced low-rank constraints to recover the extended observability and controllability factors and, ultimately, the (A, B, C, D, K) matrices. Open- and closed-loop variants employ orthogonal projections or predictor-based residual whitening (Mercère, 2013); a simplified sketch follows this list.
- Tensor Models and Multi-scale Systems: Multilinear algebra generalizes state-space identification to high-order tensor trajectories, allowing simultaneous modeling of coupled multi-dimensional signals, and supports multi-rate/multi-scale computation via time-lifting (Murthy, 2013).
- Analytic Kernel Models: For Gaussian processes with rational spectral densities (e.g., Matérn or SHO kernels), analytic state-space representations enable O(n) likelihood evaluations and Kalman filtering for scalable time series inference (Jordán et al., 2021).
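As referenced in the subspace identification item above, a simplified MOESP-style sketch of the projection-plus-SVD step for noise-free SISO data, recovering $A$ and $C$ only (estimating $B$, $D$, and $K$ requires a further least-squares step that is omitted; horizon and tolerance values are assumptions):

```python
import numpy as np

def block_hankel(w, f, N):
    """f-row block Hankel matrix from a 1-D signal w: H[i, j] = w[i + j]."""
    return np.array([[w[i + j] for j in range(N)] for i in range(f)])

def moesp_ac(u, y, f=10, tol=1e-6):
    """Estimate (A, C) of a SISO LTI system from noise-free input/output data.
    The output block-Hankel matrix is projected onto the orthogonal complement of the
    input row space; the column space of the result spans the extended observability matrix."""
    N = len(u) - f + 1
    U, Y = block_hankel(u, f, N), block_hankel(y, f, N)
    Pi = np.eye(N) - U.T @ np.linalg.pinv(U @ U.T) @ U   # projector onto null space of U rows
    Uo, s, _ = np.linalg.svd(Y @ Pi)
    n = int(np.sum(s > tol * s[0]))                      # model order from singular values
    Obs = Uo[:, :n] * np.sqrt(s[:n])                     # extended observability (up to basis)
    C = Obs[:1, :]
    A = np.linalg.pinv(Obs[:-1, :]) @ Obs[1:, :]
    return A, C

# Usage: feed measured (u, y) sequences, e.g. generated with the simulate_lti sketch above.
```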
4. Extensions: Time-Variability, Non-Gaussianity, and Infinite Dimensions
Time-varying Dynamics: Parameterizing the state transition matrix as a time-dependent linear combination of basis matrices with Markovian (latent) weights, $A(n) = \sum_{k=1}^{K} w_k(n)\,B_k$, allows for smooth or piecewise switching in system dynamics. This enables richer modeling of physical systems with smoothly evolving or regime-switching dynamics, and variational Bayesian inference with ARD (automatic relevance determination) priors allows for model selection and component pruning (Luttinen et al., 2014).
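A minimal generative sketch of this parameterization (the basis matrices and random-walk weight dynamics below are illustrative; the cited work infers the weights variationally rather than fixing them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two basis matrices mixed by time-varying weights: A(n) = w_1(n) B_1 + w_2(n) B_2.
B1 = np.array([[0.95, 0.05], [-0.05, 0.95]])   # slow rotation
B2 = np.array([[0.50, 0.00], [0.00, 0.50]])    # strong contraction

def simulate_tv(T=200, q=0.05, r=0.1):
    """Latent weights follow a smooth random walk; observations are y(n) = C x(n) + noise."""
    C = np.array([[1.0, 0.0]])
    x, w = np.array([1.0, 0.0]), np.array([1.0, 0.0])
    ys = []
    for _ in range(T):
        w = w + q * rng.standard_normal(2)      # slowly varying mixing weights
        A_n = w[0] * B1 + w[1] * B2             # time-varying transition matrix
        x = A_n @ x + 0.01 * rng.standard_normal(2)
        ys.append(C @ x + r * rng.standard_normal(1))
    return np.array(ys)

y = simulate_tv()
```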
Non-Gaussian and Heavy-tailed State Drivers: SDEs driven by general non-Gaussian Lévy processes (e.g., α-stable) are cast in state-space form by representing the Lévy process via a shot-noise Poisson sum and embedding it into the system update, facilitating marginalization of latent variables and Kalman filtering within a sequential Monte Carlo framework (Godsill et al., 2019).
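A minimal sketch of the shot-noise idea in the finite-activity (compound Poisson) special case, where the exact state update over one sampling interval sums the propagated jumps; the α-stable series representation of the cited work is more involved, and all rates, jump distributions, and matrices here are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # illustrative continuous-time dynamics
h = np.array([0.0, 1.0])                   # noise input vector
rate, dt = 5.0, 0.1                        # Poisson jump rate and sampling interval

def levy_step(x):
    """Exact update of dx = A x dt + h dL(t) over [0, dt] for compound Poisson L:
    x(dt) = e^{A dt} x(0) + sum_i e^{A (dt - tau_i)} h J_i over jumps (tau_i, J_i)."""
    x_next = expm(A * dt) @ x
    n_jumps = rng.poisson(rate * dt)
    for tau in rng.uniform(0.0, dt, size=n_jumps):   # jump times, uniform given the count
        J = rng.standard_t(df=3)                     # heavy-tailed jump size (illustrative)
        x_next += expm(A * (dt - tau)) @ h * J
    return x_next

x = np.zeros(2)
traj = []
for _ in range(100):
    x = levy_step(x)
    traj.append(x)
```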
Infinite-dimensional Systems and PDEs: The evolution of coupled PDEs in multiple spatial variables is expressed as a PIE state-space, where the inverse of the spatial differential operator is constructed analytically as a partial-integral (PI) operator. The operator algebra of PI operators generalizes the matrix algebra of LTI systems, enabling convex stability and control analysis via semi-definite programming on matrix-parameterized kernels (Jagt et al., 2025).
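For orientation, in the simplest one-spatial-variable case a partial-integral (PI) operator acting on a function-valued state takes the form

$$(\mathcal{P}\,\mathbf{x})(s) = R_0(s)\,\mathbf{x}(s) + \int_a^s R_1(s,\theta)\,\mathbf{x}(\theta)\,d\theta + \int_s^b R_2(s,\theta)\,\mathbf{x}(\theta)\,d\theta, \qquad s \in [a, b],$$

with matrix-valued (typically polynomial) kernels $R_0, R_1, R_2$; the cited work develops the corresponding operator algebra and convexity results for multiple spatial variables, so this display is a standard 1D form rather than that construction itself.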
5. State-Space Representations in Control, Estimation, and Optimization
The state-space approach is foundational for effective design and analysis:
- Filter and Estimator Design: Kalman filtering, both in classical Gaussian and generalized Lévy/non-Gaussian scenarios, exploits linear state-space structure for recursive Bayesian estimation, O(n) computational scaling, and tractable prediction (Jordán et al., 2021, Godsill et al., 2019); a minimal sketch follows this list.
- Control Synthesis and Rational Interpolation: Solutions to interpolation problems under norm constraints (e.g., H∞ Leech problem) admit complete characterizations in terms of linear-fractional state-space maps whose coefficients are computable via Riccati equations and spectral factorization, providing explicit parametrizations of all admissible stable solutions (Frazho et al., 2014).
- Transfer Function Linearization and Structural Preservation: The vector space of linearizations not only guarantees the existence of strong minimal linearizations for any rational transfer function but also allows recovery of structure-preserving (symmetric/Hermitian) state-space forms when the system exhibits these symmetries (Bist et al., 2024).
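As referenced in the filter-design item above, a minimal sketch combining an analytic Matérn-3/2 state-space form with a standard Kalman recursion, giving O(n) marginal likelihood evaluation over n samples (the kernel's state-space form follows the standard Hartikainen–Särkkä construction; hyperparameters and data are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

def matern32_ssm(ell, sigma, dt):
    """Exact discrete-time state-space form of a Matérn-3/2 Gaussian process."""
    lam = np.sqrt(3.0) / ell
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
    Pinf = np.diag([sigma**2, (lam * sigma)**2])     # stationary state covariance
    A = expm(F * dt)
    Q = Pinf - A @ Pinf @ A.T                        # exact process noise over one step
    H = np.array([[1.0, 0.0]])
    return A, Q, H, Pinf

def kalman_loglik(y, A, Q, H, Pinf, r2):
    """O(n) marginal log-likelihood of observations y with measurement noise variance r2."""
    m, P = np.zeros(2), Pinf.copy()
    ll = 0.0
    for yk in y:
        S = (H @ P @ H.T).item() + r2                # innovation variance
        v = yk - (H @ m).item()                      # innovation
        K = (P @ H.T) / S                            # Kalman gain, shape (2, 1)
        ll += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
        m = m + K[:, 0] * v
        P = P - K @ H @ P
        m, P = A @ m, A @ P @ A.T + Q                # predict to the next sample
    return ll

A, Q, H, Pinf = matern32_ssm(ell=1.0, sigma=1.0, dt=0.1)
ll = kalman_loglik(np.random.default_rng(2).standard_normal(500), A, Q, H, Pinf, r2=0.1)
```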
6. Emerging Trends: Structural Theory and Algorithmic Developments
Current research extends state-space representations across several axes:
- Partial Realization and Data-Driven Construction: Partial realization theory and algorithms such as the LPV Ho–Kalman algorithm enable the construction of minimal and partial state-space models from finite IO data, with precise guarantees based on the observed Hankel-rank sequence (Petreczky et al., 2016).
- Function-Space and Operator Algebraic Methods: The operator-theoretic generalization in PIE state-space provides a bridge to infinite-dimensional analysis, with operator convexity and positivity constraints ensuring computational tractability in large-scale or distributed PDE systems (Jagt et al., 2025).
- Structure-Preserving Linearizations and Parametrizations: The construction of ansatz spaces for linearizations leads to generic frameworks for multivariable and polynomial/rational systems, including systematic recovery of minimal realizations and explicit structure-preserving forms (Bist et al., 2024).