Linear Matrix Inequalities (LMIs)
- Linear Matrix Inequalities (LMIs) are matrix-valued affine constraints on symmetric or Hermitian matrices that yield convex feasible sets known as spectrahedra.
- LMIs underpin robust control, stability analysis, and data-driven methods by enabling tractable semidefinite programming solutions.
- Their manipulation via techniques like the Schur complement and congruence transformations facilitates effective variable reduction and relaxation in complex optimization problems.
A linear matrix inequality (LMI) is a matrix-valued affine constraint in which symmetric or Hermitian matrices, parameterized affinely by some variables, are required to be positive semidefinite. LMIs play a central role throughout modern convex optimization, systems and control theory, stability analysis, robust optimization, real algebraic geometry, sum-of-squares programming, and data-driven and learning-based control frameworks. Their tractability—due to convexity and the existence of efficient semidefinite programming (SDP) solvers—has led to their adoption in both finite- and infinite-dimensional settings.
1. Fundamental Definitions and Geometric Characterization
Given $\mathbb{S}^n$, the space of real symmetric $n \times n$ matrices, a standard (real) LMI is formulated as
$$A(x) = A_0 + x_1 A_1 + \cdots + x_m A_m \succeq 0,$$
where $A_0, A_1, \ldots, A_m \in \mathbb{S}^n$ and $\succeq 0$ denotes positive semidefiniteness. The feasible set
$$\mathcal{S} = \{\, x \in \mathbb{R}^m : A(x) \succeq 0 \,\}$$
is called a spectrahedron, a closed, convex, semi-algebraic subset of $\mathbb{R}^m$ defined by polynomial inequalities (all principal minors of $A(x)$ nonnegative) (Henrion, 2013).
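Membership in a spectrahedron reduces to a single eigenvalue test on the pencil. A minimal numpy sketch (the pencil matrices below are illustrative; here the feasible set is the closed unit disk):

```python
import numpy as np

def lmi_matrix(x, A):
    """Evaluate the pencil A(x) = A[0] + x[0]*A[1] + ... + x[m-1]*A[m]."""
    M = A[0].copy()
    for xi, Ai in zip(x, A[1:]):
        M = M + xi * Ai
    return M

def in_spectrahedron(x, A, tol=1e-9):
    """x is feasible iff the smallest eigenvalue of A(x) is nonnegative."""
    return np.linalg.eigvalsh(lmi_matrix(x, A)).min() >= -tol

# A(x) = I + x1*A1 + x2*A2 with eigenvalues 1 +/- ||x||: the unit disk.
A0 = np.eye(2)
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(in_spectrahedron([0.3, 0.4], [A0, A1, A2]))  # True (inside)
print(in_spectrahedron([0.9, 0.9], [A0, A1, A2]))  # False (outside)
```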
For spectrahedra arising in the complex plane (e.g., control-oriented LMI regions), the defining characteristic function is Hermitian-valued:
$$f_{\mathcal{D}}(z) = L_0 + z L_1 + \bar{z} L_1^\top,$$
where $L_0 = L_0^\top$ is symmetric and $L_1$ is real. The LMI region is
$$\mathcal{D} = \{\, z \in \mathbb{C} : f_{\mathcal{D}}(z) \prec 0 \,\},$$
with $\prec 0$ indicating strict negative definiteness (Kushel, 2019).
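Membership in an LMI region is likewise a definiteness test on the characteristic function. A sketch for the classical disk region of pole placement (the matrices `L0`, `L1` below encode a disk of radius $r$ centered at $-q$ and are chosen for illustration):

```python
import numpy as np

def in_lmi_region(z, L0, L1, tol=1e-9):
    """z lies in the region iff f(z) = L0 + z*L1 + conj(z)*L1^T
    is negative definite (largest eigenvalue strictly below zero)."""
    F = L0 + z * L1 + np.conj(z) * L1.T
    return np.linalg.eigvalsh(F).max() < -tol

# Disk of radius r centered at -q: f(z) = [[-r, q + z], [q + conj(z), -r]],
# which is negative definite exactly when |z + q| < r.
q, r = 2.0, 1.5
L0 = np.array([[-r, q], [q, -r]], dtype=complex)
L1 = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
print(in_lmi_region(-2.0 + 0.5j, L0, L1))  # True:  |z + 2| = 0.5 < 1.5
print(in_lmi_region(0.0 + 0.0j, L0, L1))   # False: |z + 2| = 2.0 > 1.5
```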
Key Geometric Properties
- Boundary description: the boundary satisfies $\partial\mathcal{D} \subseteq \{\, z : \det f_{\mathcal{D}}(z) = 0 \,\}$.
- Recession cone: $\mathrm{rec}(\mathcal{D}) = \{\, z : z L_1 + \bar{z} L_1^\top \preceq 0 \,\}$; $\mathcal{D}$ is bounded iff $\mathrm{rec}(\mathcal{D}) = \{0\}$.
- Lineality space: nontrivial only if $L_1$ is symmetric ($\mathcal{D}$ is a vertical strip) or skew-symmetric (horizontal strip); otherwise, trivial.
LMI regions can often be decomposed as intersections of elementary domains (half-planes, conic sectors, strips, hyperbola sides), depending on the spectral structure and commutativity of $L_0$ and $L_1$ (Kushel, 2019).
2. Algebraic and Computational Structure
Representational Power and Limitations
Convex sets defined via LMIs admit semidefinite extended formulations: representations as projections of affine slices of products $\mathbb{S}_+^{k_1} \times \cdots \times \mathbb{S}_+^{k_r}$ of PSD cones, where the minimal admissible block size $\max_i k_i$ is the semidefinite extension degree (Averkov, 2018). The canonical example is the sum-of-squares (SOS) cone $\Sigma_{n,2d}$ of degree-$2d$ forms in $n$ variables, representable by a single LMI of size $\binom{n+d-1}{d}$. This size is optimal: no arrangement of smaller LMIs, regardless of number, suffices to represent SOS cones or the cone $\mathbb{S}_+^n$ itself (Averkov, 2018).
These constraints also demarcate what can be modeled efficiently via SDP; for polynomial optimization, all SOS-based relaxations entail SDPs with block sizes lower-bounded by this combinatorial growth, which rapidly becomes computationally prohibitive (Averkov, 2018).
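The combinatorial growth of the minimal LMI block size $\binom{n+d-1}{d}$ can be tabulated directly; a small stdlib-only sketch:

```python
from math import comb

# Size of the single LMI (Gram matrix) representing the SOS cone of
# degree-2d forms in n variables: the number of degree-d monomials.
for n in (2, 4, 8, 16):
    sizes = [comb(n + d - 1, d) for d in (1, 2, 3, 4)]
    print(n, sizes)
# For n = 16 the block sizes are already [16, 136, 816, 3876]:
# SOS relaxations become prohibitively large well before high degree.
```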
Spectrahedral Shadows and Lifts
General convex semialgebraic sets may not be spectrahedra but can often be written as spectrahedral shadows—projections of higher-dimensional spectrahedra (Henrion, 2013). Every convex planar semialgebraic set is a spectrahedral shadow.
Feasibility and Parametric Algorithms
Exactly deciding or characterizing feasibility, or finding a point in the spectrahedron $\mathcal{S}$, can be achieved via algebraic-geometric algorithms based on the rank stratification of the pencil $A(x)$. Under genericity assumptions, these produce a rational parametrization of a feasible point, exploiting incidence varieties and critical-point methods, with complexity essentially polynomial in the matrix size $n$ for a fixed number of variables $m$ (Henrion et al., 2015).
For parametric problems (entries polynomial in parameters $y$), quantifier-free descriptions of the feasible parameter set can be computed by leveraging determinantal and polar variety decompositions, with complexity polynomial in problem size for fixed matrix size and parameter dimension (Naldi et al., 3 Mar 2025).
3. Principal Theoretical Developments: Properties, Manipulations, and Relaxations
Fundamental Manipulation Techniques
- Schur Complement Lemma: central for transforming and interpreting LMI constraints; for a block matrix $M = \begin{bmatrix} A & B \\ B^\top & C \end{bmatrix}$ with $C \succ 0$, the LMI $M \succeq 0$ is equivalent to $A - B C^{-1} B^\top \succeq 0$.
- Congruence and Scaling Transformations: for nonsingular $T$, $M \succ 0$ iff $T^\top M T \succ 0$.
- Projection Lemma and S-procedure: used for variable elimination and parameter-dependent relaxations.
- Finsler's Lemma: characterizes LMI feasibility restricted to affine subspaces.
These tools underpin variable reductions, relaxations, and derivation of numerous equivalent forms of key control and system-theoretic LMIs (e.g., Lyapunov, bounded-real, and Kalman-Yakubovich-Popov (KYP) variations) (Caverly et al., 2019).
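The Schur complement equivalence above can be checked numerically; a sketch with randomly generated blocks (shifted Gram matrices guarantee $C \succ 0$, as the lemma requires):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A @ A.T + 3 * np.eye(3)  # A > 0
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 2)); C = C @ C.T + 3 * np.eye(2)  # C > 0

M = np.block([[A, B], [B.T, C]])          # block matrix [[A, B], [B^T, C]]
schur = A - B @ np.linalg.inv(C) @ B.T    # Schur complement of C in M

def is_psd(X, tol=1e-9):
    return np.linalg.eigvalsh(X).min() >= -tol

# With C > 0, M >= 0 holds iff the Schur complement is >= 0:
# the two tests always return the same answer.
print(is_psd(M), is_psd(schur))
```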
Hierarchies and Generalizations
LMIs are a special case of semidefinite representable convex constraints and can be hierarchically related: increasing the minimal matrix size strictly increases the representable set class (Averkov, 2018). Recent research explores less conservative relaxations for parameterized LMIs appearing in fuzzy and robust control using sum-relaxation methods such as Young's inequality, with demonstrated improvements in conservatism and feasibility (Kim et al., 2021).
4. Applications and Impact in Systems and Control
Controller Synthesis and Analysis
LMIs underpin robust and optimal control design, including static and dynamic output feedback, observer design, and certification of dissipativity, passivity, and stability properties for LTI, nonlinear, port-Hamiltonian, and linear-parameter-varying (LPV) systems. For port-Hamiltonian systems, LMI-based designs yield modular (observer/feedback) gain selection guaranteeing exponential or asymptotic stability, encoded as convex matrix inequalities in physical energy and dissipation metrics (Toledo et al., 2020).
For data-driven or model-free settings (e.g., learning controllers to respect specified LMI regions in the complex plane, such as disk or conic damping regions), robust pole placement can be achieved based solely on experimental open-loop data and convex LMI feasibility regions for all dynamics consistent with observed trajectories and noise models (Bisoffi et al., 2021).
Lyapunov, IQCs, and Trajectory Optimization
LMI formulations enable convex searches for polynomial Lyapunov or storage functions (including in Lyapunov-based global optimization, integral quadratic constraint stability analysis, trajectory and occupation measure methods) (Henrion, 2013). In IQC-based absolute stability, infeasibility of the primal LMI can be used (through LMI duality) to construct explicit destabilizing nonlinearities, closing the sufficiency-necessity gap in classical stability analysis (Gyotoku et al., 2024).
Learning and Data-Driven Control
Modern frameworks employ LMI constraints inside neural network architectures to "certify by construction" (e.g., via differentiable Douglas-Rachford projection layers) satisfaction of stability or invariance properties, enabling robust learning-based control with formal guarantees beyond what penalty-based or soft-constrained loss functions deliver (Tang et al., 7 Apr 2026).
Reinforcement Learning
Quadratic Q-function structures subject to LMI constraints in the form of semidefinite programs (via Schur-complement relaxations) enable data-efficient, direct Q-learning algorithms, robustly learning stabilizing policies with far fewer samples than traditional schemes (Hulst et al., 2024).
System Identification
Physical consistency of identified inertial parameters in robotics can be exactly characterized as LMIs on moment matrices (pseudo-inertia), enforcing triangle inequalities and bounding-volume support (localizing constraints), resulting in more robust and sample-efficient parameter identification (Wensing et al., 2017).
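A minimal sketch of the pseudo-inertia test, following the standard $4 \times 4$ moment-matrix construction (the example bodies below are illustrative assumptions; the inertia is taken about the body-frame origin):

```python
import numpy as np

def pseudo_inertia(m, c, I):
    """4x4 pseudo-inertia J from mass m, center of mass c, and rotational
    inertia I about the origin. Sigma = 0.5*trace(I)*E - I recovers the
    second-moment matrix of the mass distribution."""
    c = np.asarray(c, dtype=float)
    Sigma = 0.5 * np.trace(I) * np.eye(3) - I
    h = m * c
    J = np.zeros((4, 4))
    J[:3, :3] = Sigma
    J[:3, 3] = h
    J[3, :3] = h
    J[3, 3] = m
    return J

def physically_consistent(m, c, I, tol=1e-9):
    """The LMI J >= 0 encodes (among others) the triangle inequalities
    on the principal moments of inertia."""
    return np.linalg.eigvalsh(pseudo_inertia(m, c, I)).min() >= -tol

# Uniform solid sphere (radius 1, mass 1) centered at the origin:
I_sphere = (2.0 / 5.0) * np.eye(3)
print(physically_consistent(1.0, [0, 0, 0], I_sphere))  # True
# Principal moments violating the triangle inequality (1 + 1 < 3):
I_bad = np.diag([1.0, 1.0, 3.0])
print(physically_consistent(1.0, [0, 0, 0], I_bad))     # False
```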
Robust and Time-Varying Systems
Stability of piecewise linear time-varying (LTV) systems and robust performance under parametric uncertainty can be expressed as sequences of LMIs, reducible to finite-dimensional feasibility problems even under time or parameter variation, using S-procedure techniques and bisection algorithms to determine maximal uncertainty bounds (Ahmed et al., 2023).
Guaranteed-Time and Reachability
Piecewise-quadratic or piecewise-polynomial Lyapunov functions constructed via LMIs over simplicial partitions, with harmonic transformation and structural relaxations, deliver certified bounds on worst-case reaching time in constrained or uncertain nonlinear systems (Campos et al., 3 Oct 2025).
5. Free Spectrahedra, Operator-Theoretic Extensions, and Relaxation Hierarchies
Free LMI Relaxations and Commutativity
Matrix-variable relaxations (allowing noncommuting variables) define free spectrahedra:
$$\mathcal{D}_A = \{\, X = (X_1, \ldots, X_m) : L_A(X) = A_0 \otimes I + A_1 \otimes X_1 + \cdots + A_m \otimes X_m \succeq 0 \,\},$$
where the $X_j$ range over symmetric matrices of all sizes. These noncommutative domains generalize scalar-valued LMIs and underlie key developments in dilation theory and operator algebras (Helton et al., 2010, Helton et al., 2014).
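Evaluating the free pencil at a matrix tuple is a Kronecker-product computation; a sketch assuming the convention $A_0 \otimes I + \sum_j A_j \otimes X_j$, reusing an illustrative unit-disk pencil:

```python
import numpy as np

def free_lmi_value(X, A):
    """Evaluate L_A(X) = A[0] (x) I_n + sum_j A[j] (x) X[j-1] for a tuple
    of symmetric n x n matrices substituted into the pencil."""
    n = X[0].shape[0]
    L = np.kron(A[0], np.eye(n))
    for Aj, Xj in zip(A[1:], X):
        L = L + np.kron(Aj, Xj)
    return L

def in_free_spectrahedron(X, A, tol=1e-9):
    return np.linalg.eigvalsh(free_lmi_value(X, A)).min() >= -tol

# Same pencil whose scalar spectrahedron is the unit disk, now with
# matrix arguments. Commuting diagonal X reduce to scalar membership
# of each pair of diagonal entries.
A0 = np.eye(2)
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
X1 = np.diag([0.3, -0.2])
X2 = np.diag([0.4, 0.1])
print(in_free_spectrahedron([X1, X2], [A0, A1, A2]))  # True
```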
Dilation Theorems and Scale Factors
Simultaneous dilation results establish that for each matrix size $n$, all tuples of $n \times n$ symmetric contractions $(X_1, \ldots, X_m)$ are compressions, up to a sharp constant $\vartheta(n)$, of tuples of commuting self-adjoint operators with spectrum in the classical LMI region. Analytic formulas for $\vartheta(n)$, derived using Beta distributions and combinatorial identities, determine the tightest scale for the relaxation error between free and classical spectrahedron inclusions (Helton et al., 2014).
Inclusion, Dominance, and Complete Positivity
Containment and dominance between LMI regions ($\mathcal{D}_A \subseteq \mathcal{D}_B$) are characterized via the theory of completely positive maps. The Choi matrix criterion translates set inclusion into explicit semidefinite programs; LMI regions are unitarily equivalent iff their minimal defining pencils are equivalent under congruence (linear Gleichstellensatz) (Helton et al., 2010, Helton et al., 2014).
Positivstellensatz results state that any polynomial matrix strictly positive on a bounded spectrahedron can be represented as a sum of Hermitian squares plus LMI-structured terms, without archimedean hypotheses (Helton et al., 2010).
6. Numerical Algorithms, Complexity, and Analytic Center Bounds
SDP Solvers and Interior-Point Methods
For LMIs embedded in SDPs, primal-dual interior-point methods are standard, with convergence guarantees provided mild regularity holds (Slater's condition/strict feasibility). For dense problems with $m$ variables and $n \times n$ matrices, each Newton step costs on the order of $O(m^2 n^2 + m n^3)$ flops, but sparsity and symmetry can be exploited (Henrion, 2013).
Linearly convergent first-order algorithms (restart subgradient or Nesterov's accelerated method) are available for large-scale feasibility, given global error bounds under Slater-type conditions (Dang et al., 2013).
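A minimal first-order feasibility sketch in this spirit: plain subgradient descent on the infeasibility measure $\max(0, -\lambda_{\min}(A(x)))$, not the restarted scheme of Dang et al. (function and step size are illustrative choices):

```python
import numpy as np

def subgradient_feasibility(A, x0, steps=500, lr=0.1, tol=1e-8):
    """Drive f(x) = max(0, -lambda_min(A(x))) to zero. A subgradient of
    -lambda_min at x is (-v^T A_1 v, ..., -v^T A_m v), where v is a unit
    eigenvector for the smallest eigenvalue of A(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        M = A[0] + sum(xi * Ai for xi, Ai in zip(x, A[1:]))
        w, V = np.linalg.eigh(M)
        if w[0] >= -tol:
            return x          # feasible point found
        v = V[:, 0]
        g = np.array([-(v @ Ai @ v) for Ai in A[1:]])
        x = x - lr * g        # fixed-step subgradient update
    return None

# Unit-disk pencil again; start well outside and descend into the set.
A0 = np.eye(2)
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
x = subgradient_feasibility([A0, A1, A2], x0=[3.0, 3.0])
print(x is not None)  # True: a feasible point was reached
```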
Analytic Center and Approximation Quality
The analytic center of a spectrahedron, defined by the point $x$ maximizing $\log \det A(x)$ over the feasible LMI set (so that $A(x) \succ 0$), admits new sharp entry-wise accuracy (Frobenius-norm) bounds in terms of the duality (log-det) gap via properties of the Lambert $W$ function. This directly informs when to terminate interior-point algorithms for certified parameter-wise accuracy (Roig-Solvas et al., 2020).
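For a one-parameter pencil the analytic center can be computed by hand: $\frac{d}{dx} \log \det A(x) = \operatorname{tr}(A(x)^{-1} A_1)$ is strictly decreasing on the interior, so a bisection on its sign suffices. A sketch (the diagonal pencil is an illustrative toy problem):

```python
import numpy as np

def analytic_center_1d(A0, A1, lo, hi, iters=60):
    """Maximize log det(A0 + x*A1) on an interval strictly inside the
    feasible set, by bisection on the monotone derivative tr(A(x)^{-1} A1)."""
    def dlogdet(x):
        return np.trace(np.linalg.solve(A0 + x * A1, A1))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dlogdet(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A(x) = diag(1 + x, 2 - x): feasible for x in (-1, 2), and
# log det = log(1 + x) + log(2 - x) is maximized at x = 1/2.
A0 = np.diag([1.0, 2.0])
A1 = np.diag([1.0, -1.0])
print(analytic_center_1d(A0, A1, -0.99, 1.99))  # ~0.5
```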
7. Outlook, Open Directions, and Limits
The theoretical boundaries of LMI representability—what classes of convex sets can be described by LMIs or their spectrahedral shadows—remain a subject of ongoing research, especially in high dimensions and for nonpolyhedral sets. For parametric and symbolic problems, specialized algebraic-geometric algorithms now provide quantifier-free certificates and exact feasibility regions for generic data, at polynomial complexity in moderate dimensions (Naldi et al., 3 Mar 2025). In high-complexity domains (e.g., robust control, fuzzy systems, neural certificate synthesis), efficient relaxations and structural exploitation (sparsity, symmetry, sum-relaxation, tensor lifting) are critical for practical tractability.
Table: Principal Categories of LMI Applications and Formulations
| Domain | LMI Structure/Formulation | Reference(s) |
|---|---|---|
| State feedback synthesis | $AX + XA^\top + BY + Y^\top B^\top \prec 0$, $X \succ 0$ (with $K = Y X^{-1}$) | (Caverly et al., 2019, Toledo et al., 2020) |
| Robust control/data-driven pole placement | $L_0 \otimes X + L_1 \otimes (AX) + L_1^\top \otimes (AX)^\top \prec 0$ (collected over all dynamics consistent with data) | (Bisoffi et al., 2021) |
| SOS polynomial optimization | $p(x) = z(x)^\top Q z(x)$, $Q \succeq 0$, for a monomial basis $z(x)$ | (Averkov, 2018, Henrion, 2013) |
| Piecewise LTV stability | Segmentwise quadratic $V_i(x) = x^\top P_i x$, $P_i \succ 0$ | (Ahmed et al., 2023) |
| Neural control with certification | Stability/invariance LMIs on network parameters, enforced by differentiable projection | (Tang et al., 7 Apr 2026) |
| Inertial parameter identification | Pseudo-inertia $J(\pi) \succeq 0$ | (Wensing et al., 2017) |
LMIs remain the pre-eminent tractable tool for representing, analyzing, and certifying a diversity of convex specifications in systems, optimization, and data analysis. Their integration into emerging data-driven and learning-based methodologies continues to drive methodological and computational advances across control, machine learning, optimization, and applied mathematics.