
Linear Matrix Inequalities (LMIs)

Updated 18 April 2026
  • Linear Matrix Inequalities (LMIs) are matrix-valued affine constraints on symmetric or Hermitian matrices that yield convex feasible sets known as spectrahedra.
  • LMIs underpin robust control, stability analysis, and data-driven methods by enabling tractable semidefinite programming solutions.
  • Their manipulation via techniques like the Schur complement and congruence transformations facilitates effective variable reduction and relaxation in complex optimization problems.

A linear matrix inequality (LMI) is a matrix-valued affine constraint in which symmetric or Hermitian matrices, parameterized affinely by some variables, are required to be positive semidefinite. LMIs play a central role throughout modern convex optimization, systems and control theory, stability analysis, robust optimization, real algebraic geometry, sum-of-squares programming, and data-driven and learning-based control frameworks. Their tractability—due to convexity and the existence of efficient semidefinite programming (SDP) solvers—has led to their adoption in both finite- and infinite-dimensional settings.

1. Fundamental Definitions and Geometric Characterization

Given A_0, A_1, \dots, A_n \in \mathbb{S}^m, the space of real symmetric m \times m matrices, a standard (real) LMI is formulated as A(x) = A_0 + \sum_{i=1}^n x_i A_i \succeq 0, where x = (x_1, \dots, x_n) \in \mathbb{R}^n and \succeq 0 denotes positive semidefiniteness. The feasible set

\mathcal{S} = \{ x \in \mathbb{R}^n : A(x) \succeq 0 \}

is called a spectrahedron, a closed, convex semi-algebraic subset of \mathbb{R}^n defined by polynomial inequalities (all principal minors of A(x) nonnegative) (Henrion, 2013).
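As a concrete illustration, membership of a point in a spectrahedron can be checked numerically by assembling A(x) and inspecting its smallest eigenvalue. A minimal NumPy sketch (the function name and the diagonal example pencil are illustrative, not from the cited papers):

```python
import numpy as np

def in_spectrahedron(A, x, tol=1e-9):
    """Check whether x lies in S = {x : A(x) = A_0 + sum_i x_i A_i >= 0}.

    A is a list [A_0, A_1, ..., A_n] of symmetric matrices.
    """
    Ax = A[0] + sum(xi * Ai for xi, Ai in zip(x, A[1:]))
    # Positive semidefiniteness <=> smallest eigenvalue is nonnegative.
    return np.linalg.eigvalsh(Ax).min() >= -tol

# Example pencil: A(x) = diag(1 + x1, 1 - x1) >= 0  <=>  -1 <= x1 <= 1.
A0 = np.eye(2)
A1 = np.diag([1.0, -1.0])
print(in_spectrahedron([A0, A1], [0.5]))   # inside:  True
print(in_spectrahedron([A0, A1], [2.0]))   # outside: False
```

Here the spectrahedron is just the interval [-1, 1]; the same eigenvalue test applies verbatim to pencils of any size.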

For spectrahedra arising in the complex plane (e.g., control-oriented LMI regions), the defining characteristic function is Hermitian-valued: f_{\mathfrak{D}}(z) = L + z M + \bar{z} M^\top, where L, M \in \mathbb{R}^{n \times n} with L = L^\top symmetric (M need not be). The LMI region is

\mathfrak{D} = \{ z \in \mathbb{C} : f_{\mathfrak{D}}(z) \prec 0 \},

with \prec 0 indicating strict negative definiteness (Kushel, 2019).

Key Geometric Properties

  • Boundary description: The boundary of \mathfrak{D} lies on the real algebraic curve \{ z \in \mathbb{C} : \det f_{\mathfrak{D}}(z) = 0 \}.
  • Recession cone: \{ z : z M + \bar{z} M^\top \preceq 0 \}; \mathfrak{D} is bounded iff this cone is trivial.
  • Lineality space: Nontrivial only if M is symmetric (\mathfrak{D} is a vertical strip) or skew-symmetric (horizontal strip); otherwise, trivial.

LMI regions can often be decomposed as intersections of elementary domains (half-planes, conic sectors, stripes, hyperbolic sides), dependent on the spectral structure and commutativity properties of L and M (Kushel, 2019).
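The region definition can be made concrete with two elementary domains; the half-plane and disk pencils below are standard textbook instances (assumed example data, not taken from the cited paper):

```python
import numpy as np

def in_lmi_region(L, M, z, tol=1e-9):
    """Check z in D = {z in C : f_D(z) = L + z*M + conj(z)*M^T negative definite}."""
    F = L + z * M + np.conj(z) * M.T        # Hermitian by construction
    return np.linalg.eigvalsh(F).max() < -tol

# Open left half-plane: L = [0], M = [1] gives f_D(z) = 2 Re(z) < 0.
L_hp, M_hp = np.zeros((1, 1)), np.ones((1, 1))
print(in_lmi_region(L_hp, M_hp, -1 + 5j))       # True: Re(z) < 0

# Disk |z + q| < r: f_D(z) = [[-r, q + z], [q + conj(z), -r]].
r, q = 2.0, 1.0
L_d = np.array([[-r, q], [q, -r]])
M_d = np.array([[0.0, 1.0], [0.0, 0.0]])
print(in_lmi_region(L_d, M_d, -1.0 + 0.5j))     # True: |z + 1| = 0.5 < 2
```

The disk case illustrates the Schur-complement mechanism at work: negative definiteness of the 2x2 pencil is exactly the condition |z + q|^2 < r^2.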

2. Algebraic and Computational Structure

Representational Power and Limitations

Convex sets defined via LMIs admit semidefinite extended formulations: representations as projections of feasible sets of LMIs whose blocks have size at most k, where the minimal such k is the semidefinite extension degree (Averkov, 2018). The canonical example is the sum-of-squares (SOS) cone \Sigma_{n,2d} of degree-2d SOS forms in n variables, representable by a single LMI of size \binom{n+d-1}{d}. This size is optimal: no arrangement of smaller LMIs, regardless of number, suffices to represent SOS cones or the cone of positive semidefinite matrices (Averkov, 2018).

These lower bounds also delimit what can be modeled efficiently via SDP; for polynomial optimization, all SOS-based relaxations entail SDPs with block sizes lower-bounded by this combinatorial growth, which rapidly becomes computationally prohibitive (Averkov, 2018).
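The block size of the LMI representing the SOS cone is the number of degree-d monomials in n variables; a quick sketch of its growth, assuming the standard Gram-matrix parameterization of SOS forms:

```python
from math import comb

def sos_gram_size(n, d):
    """Size of the Gram-matrix LMI certifying that a degree-2d form in n
    variables is a sum of squares: the number of degree-d monomials,
    C(n + d - 1, d)."""
    return comb(n + d - 1, d)

# Growth of the minimal LMI block size for sextic forms (2d = 6):
for n in (2, 3, 4, 10):
    print(n, sos_gram_size(n, 3))
```

Even for modest n and d the required block size grows combinatorially, which is the practical face of the optimality result above.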

Spectrahedral Shadows and Lifts

General convex semialgebraic sets may not be spectrahedra but can often be written as spectrahedral shadows—projections of higher-dimensional spectrahedra (Henrion, 2013). Every convex planar semialgebraic set is a spectrahedral shadow.

Feasibility and Parametric Algorithms

Exactly deciding or characterizing feasibility, or finding a point in \mathcal{S}, can be achieved via algebraic-geometric algorithms based on the rank stratification of A(x). Under genericity assumptions, these produce a rational parametrization, exploiting incidence varieties and critical point methods, with complexity essentially polynomial in m for fixed n (Henrion et al., 2015).

For parametric problems (entries polynomial in the parameters), quantifier-free descriptions of the feasible parameter set can be computed leveraging determinantal and polar variety decompositions, with complexity polynomial in problem size for fixed matrix size and parameter dimension (Naldi et al., 3 Mar 2025).

3. Principal Theoretical Developments: Properties, Manipulations, and Relaxations

Fundamental Manipulation Techniques

  • Schur Complement Lemma: Central for transforming and interpreting LMI constraints; for a block matrix M = \begin{bmatrix} Q & S \\ S^\top & R \end{bmatrix} with R \succ 0, the LMI M \succeq 0 is equivalent to the Schur complement condition Q - S R^{-1} S^\top \succeq 0.
  • Congruence and Scaling Transformations: For nonsingular T, M \succeq 0 iff T^\top M T \succeq 0.
  • Projection and S-procedure: Used for variable elimination and parameter-dependent relaxations.
  • Finsler's Lemma: LMI feasibility under affine relationships.
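The Schur-complement equivalence in the list above can be verified numerically on random data; a minimal NumPy sketch (random instances, with R \succ 0 enforced by construction):

```python
import numpy as np

def is_psd(X, tol=1e-9):
    """Numerical test for positive semidefiniteness of a symmetric matrix."""
    return np.linalg.eigvalsh(X).min() >= -tol

rng = np.random.default_rng(0)
n = 4
S = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
R = B @ B.T + np.eye(n)                    # R > 0 by construction
SRS = S @ np.linalg.solve(R, S.T)          # S R^{-1} S^T

# Choose Q so the Schur complement Q - S R^{-1} S^T is +I (PSD) or -I (not):
for Q, expected in ((SRS + np.eye(n), True), (SRS - np.eye(n), False)):
    M = np.block([[Q, S], [S.T, R]])
    # With R > 0, M >= 0 holds exactly when the Schur complement is PSD.
    assert is_psd(M) == expected
print("Schur-complement equivalence verified")
```

This is exactly the mechanism used to convert quadratic matrix inequalities (e.g., Riccati-type conditions) into LMIs.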

These tools underpin variable reductions, relaxations, and derivation of numerous equivalent forms of key control and system-theoretic LMIs (e.g., Lyapunov, bounded-real, and Kalman-Yakubovich-Popov (KYP) variations) (Caverly et al., 2019).

Hierarchies and Generalizations

LMIs are a special case of semidefinite representable convex constraints and can be hierarchically related: increasing the minimal matrix size strictly increases the representable set class (Averkov, 2018). Recent research explores less conservative relaxations for parameterized LMIs appearing in fuzzy and robust control using sum-relaxation methods such as Young's inequality, with demonstrated improvements in conservatism and feasibility (Kim et al., 2021).

4. Applications and Impact in Systems and Control

Controller Synthesis and Analysis

LMIs underpin robust and optimal control design, including static and dynamic output feedback, observer design, and certification of dissipativity, passivity, and stability properties for LTI, nonlinear, port-Hamiltonian, and linear-parameter-varying (LPV) systems. For port-Hamiltonian systems, LMI-based designs yield modular (observer/feedback) gain selection guaranteeing exponential or asymptotic stability, encoded as convex matrix inequalities in physical energy and dissipation metrics (Toledo et al., 2020).

For data-driven or model-free settings (e.g., learning controllers to respect specified LMI regions in the complex plane, such as disk or conic damping regions), robust pole placement can be achieved based solely on experimental open-loop data and convex LMI feasibility regions for all dynamics consistent with observed trajectories and noise models (Bisoffi et al., 2021).

Lyapunov, IQCs, and Trajectory Optimization

LMI formulations enable convex searches for polynomial Lyapunov or storage functions (including in Lyapunov-based global optimization, integral quadratic constraint stability analysis, trajectory and occupation measure methods) (Henrion, 2013). In IQC-based absolute stability, infeasibility of the primal LMI can be used (through LMI duality) to construct explicit destabilizing nonlinearities, closing the sufficiency-necessity gap in classical stability analysis (Gyotoku et al., 2024).
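As a worked instance of a Lyapunov certificate, one can solve the Lyapunov equation for a Hurwitz matrix and confirm that the solution is a feasible point of the Lyapunov LMI A^\top P + P A \prec 0, P \succ 0; a sketch assuming SciPy's solve_continuous_lyapunov (example matrix is illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hurwitz A: eigenvalues -1 and -2, both in the open left half-plane.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Solve A^T P + P A = -I; for Hurwitz A the solution P is positive definite,
# i.e. P is a feasible point of the Lyapunov LMI  A^T P + P A < 0,  P > 0.
P = solve_continuous_lyapunov(A.T, -np.eye(2))
print(np.linalg.eigvalsh(P).min() > 0)                  # True: P > 0
print(np.linalg.eigvalsh(A.T @ P + P @ A).max() < 0)    # True: A^T P + P A < 0
```

For synthesis problems the same inequality is treated as an LMI in the unknown P (or in changed variables) and handed to an SDP solver rather than solved as an equation.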

Learning and Data-Driven Control

Modern frameworks employ LMI constraints inside neural network architectures to "certify by construction" (e.g., via differentiable Douglas-Rachford projection layers) satisfaction of stability or invariance properties, enabling robust learning-based control with formal guarantees beyond what penalty-based or soft-constrained loss functions deliver (Tang et al., 7 Apr 2026).

Reinforcement Learning

Quadratic Q-function structures subject to LMI constraints in the form of semidefinite programs (via Schur-complement relaxations) enable data-efficient, direct Q-learning algorithms, robustly learning stabilizing policies with far fewer samples than traditional schemes (Hulst et al., 2024).

System Identification

Physical consistency of identified inertial parameters in robotics can be exactly characterized as LMIs on moment matrices (pseudo-inertia), enforcing triangle inequalities and bounding-volume support (localizing constraints), resulting in more robust and sample-efficient parameter identification (Wensing et al., 2017).
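A minimal sketch of the pseudo-inertia test (function and example bodies are illustrative; the 4x4 moment-matrix construction follows the pseudo-inertia form referenced above):

```python
import numpy as np

def pseudo_inertia(m, c, I_rot):
    """4x4 pseudo-inertia J = [[Sigma, h], [h^T, m]], where Sigma is the
    second-moment matrix recovered from the rotational inertia I_rot (taken
    about the body-frame origin), h = m*c, and c is the center of mass.
    Full physical consistency of (m, h, I_rot) <=> J is PSD."""
    Sigma = 0.5 * np.trace(I_rot) * np.eye(3) - I_rot
    h = m * np.asarray(c, dtype=float)
    J = np.zeros((4, 4))
    J[:3, :3] = Sigma
    J[:3, 3] = h
    J[3, :3] = h
    J[3, 3] = m
    return J

# Solid sphere of mass 1, radius 1, about its COM: I = (2/5) m r^2 I3.
J = pseudo_inertia(1.0, [0, 0, 0], 0.4 * np.eye(3))
print(np.linalg.eigvalsh(J).min() >= 0)       # True: physically consistent
# Violating the triangle inequality on principal moments (2.5 > 1 + 1):
J_bad = pseudo_inertia(1.0, [0, 0, 0], np.diag([1.0, 1.0, 2.5]))
print(np.linalg.eigvalsh(J_bad).min() >= 0)   # False
```

The PSD test subsumes the classical triangle inequalities on principal moments, which is why it can be imposed directly as an LMI constraint during identification.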

Robust and Time-Varying Systems

Stability of piecewise linear time-varying (LTV) systems and robust performance under parametric uncertainty can be expressed as sequences of LMIs, reducible to finite-dimensional feasibility problems even under time or parameter variation, using S-procedure techniques and bisection algorithms to determine maximal uncertainty bounds (Ahmed et al., 2023).

Guaranteed-Time and Reachability

Piecewise-quadratic or piecewise-polynomial Lyapunov functions constructed via LMIs over simplicial partitions, with harmonic transformation and structural relaxations, deliver certified bounds on worst-case reaching time in constrained or uncertain nonlinear systems (Campos et al., 3 Oct 2025).

5. Free Spectrahedra, Operator-Theoretic Extensions, and Relaxation Hierarchies

Free LMI Relaxations and Commutativity

Matrix-variable relaxations (allowing noncommuting variables) define free spectrahedra: \mathcal{D}_A = \{ X = (X_1, \dots, X_n) : A_0 \otimes I + \sum_{i=1}^n A_i \otimes X_i \succeq 0 \}, where the X_i range over symmetric matrices of all sizes. These noncommutative domains generalize scalar-valued LMIs and underlie key developments in dilation theory and operator algebras (Helton et al., 2010, Helton et al., 2014).
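Membership at a fixed matrix level of a free spectrahedron reduces to an eigenvalue check on a Kronecker-structured pencil; a NumPy sketch (example pencil and matrix point are illustrative):

```python
import numpy as np

def in_free_spectrahedron(A, X, tol=1e-9):
    """Membership of a tuple X = (X_1, ..., X_n) of symmetric k x k matrices
    in the free spectrahedron of the pencil A = [A_0, A_1, ..., A_n]:
        A_0 (x) I_k + sum_i A_i (x) X_i >= 0,   (x) = Kronecker product."""
    k = X[0].shape[0]
    L = np.kron(A[0], np.eye(k)) + sum(np.kron(Ai, Xi)
                                       for Ai, Xi in zip(A[1:], X))
    return np.linalg.eigvalsh(L).min() >= -tol

# For this pencil the scalar level (k = 1) is the interval [-1, 1];
# at level k the free spectrahedron is {X : -I <= X <= I}.
A0 = np.eye(2)
A1 = np.diag([1.0, -1.0])
X1 = np.array([[0.5, 0.2], [0.2, -0.3]])   # symmetric "matrix point", norm < 1
print(in_free_spectrahedron([A0, A1], [X1]))   # True
```

Setting k = 1 recovers the classical membership test, which is the sense in which free spectrahedra relax their scalar counterparts.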

Dilation Theorems and Scale Factors

Simultaneous dilation results establish that for each matrix size d, all tuples of d \times d symmetric contractions are compressions, up to a sharp constant \vartheta(d), of tuples of commuting self-adjoint operators with spectrum in the classical LMI region. Analytic formulas for \vartheta(d), derived using Beta distributions and combinatorial identities, determine the tightest scale for relaxation error between free and classical spectrahedron inclusions (Helton et al., 2014).

Inclusion, Dominance, and Complete Positivity

Containment and dominance between LMI regions (\mathcal{D}_A \subseteq \mathcal{D}_B) are characterized via the theory of completely positive maps. The Choi matrix criterion translates set inclusion to explicit semidefinite programs; LMI regions are unitarily equivalent iff their minimal defining pencils are equivalent under congruence (linear Gleichstellensatz) (Helton et al., 2010, Helton et al., 2014).

Positivstellensatz results state that any polynomial matrix strictly positive on a bounded spectrahedron can be represented as a sum of Hermitian squares plus LMI-structured terms, without archimedean hypotheses (Helton et al., 2010).

6. Numerical Algorithms, Complexity, and Analytic Center Bounds

SDP Solvers and Interior-Point Methods

For LMIs embedded in SDPs, primal-dual interior-point methods are standard, with convergence guarantees under mild regularity (Slater's condition/interiority). Each Newton step costs on the order of O(n^2 m^2) operations for dense problems, but sparsity and symmetry can be exploited (Henrion, 2013).

Linearly convergent first-order algorithms (restart subgradient or Nesterov's accelerated method) are available for large-scale feasibility, given global error bounds under Slater-type conditions (Dang et al., 2013).
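A core primitive of such first-order schemes is Euclidean (Frobenius-norm) projection onto the PSD cone, computed by clipping negative eigenvalues; a minimal sketch:

```python
import numpy as np

def project_psd(X):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    eigendecompose and clip negative eigenvalues to zero. This is the basic
    step inside first-order LMI feasibility methods."""
    w, V = np.linalg.eigh(X)
    return (V * np.clip(w, 0.0, None)) @ V.T

X = np.array([[1.0, 2.0], [2.0, -1.0]])    # indefinite: eigenvalues +-sqrt(5)
P = project_psd(X)
print(np.linalg.eigvalsh(P).min() >= -1e-12)   # True: projection is PSD
```

Alternating this projection with projection onto the affine set \{A(x)\} yields a simple (if slowly converging) feasibility scheme; the accelerated and restarted methods cited above improve on exactly this template.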

Analytic Center and Approximation Quality

The analytic center of a spectrahedron, defined through the positive definite matrix A(x) that maximizes \log \det A(x) over the feasible LMI set, admits new sharp entry-wise accuracy (Frobenius-norm) bounds in terms of the duality (log-det) gap via properties of the Lambert W function. This directly informs when to terminate interior-point algorithms for certified parameter-wise accuracy (Roig-Solvas et al., 2020).
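As a toy illustration (not the algorithm of the cited work), the analytic center of a small spectrahedron can be found by minimizing the log-det barrier with a derivative-free method; here A(x) carves out the unit disk, whose analytic center is the origin:

```python
import numpy as np
from scipy.optimize import minimize

# Pencil A(x) = A0 + x1*A1 + x2*A2; feasible set {x1^2 + x2^2 < 1}.
A0 = np.eye(2)
A1 = np.diag([1.0, -1.0])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def neg_logdet(x):
    """Barrier -log det A(x), extended by +inf outside the spectrahedron."""
    Ax = A0 + x[0] * A1 + x[1] * A2
    w = np.linalg.eigvalsh(Ax)
    if w.min() <= 0:
        return np.inf
    return -np.log(w).sum()

res = minimize(neg_logdet, x0=[0.1, 0.1], method="Nelder-Mead")
print(np.round(res.x, 3))   # near the origin, the analytic center
```

For this pencil, det A(x) = 1 - x_1^2 - x_2^2, so the barrier is maximized at x = 0; production interior-point codes replace the derivative-free search with damped Newton steps on the same objective.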

7. Outlook, Open Directions, and Limits

The theoretical boundaries of LMI representability—what classes of convex sets can be described by LMIs or their spectrahedral shadows—remain a subject of ongoing research, especially in high dimensions and for nonpolyhedral sets. For parametric and symbolic problems, specialized algebraic-geometric algorithms now provide quantifier-free certificates and exact feasibility regions for generic data, at polynomial complexity in moderate dimensions (Naldi et al., 3 Mar 2025). In high-complexity domains (e.g., robust control, fuzzy systems, neural certificate synthesis), efficient relaxations and structural exploitation (sparsity, symmetry, sum-relaxation, tensor lifting) are critical for practical tractability.

Table: Principal Categories of LMI Applications and Formulations

Domain | LMI Structure/Formulation | Reference(s)
State feedback synthesis | A X + X A^\top + B Z + Z^\top B^\top \prec 0, X \succ 0 (gain K = Z X^{-1}) | (Caverly et al., 2019, Toledo et al., 2020)
Robust control/data-driven pole placement | LMI-region pole constraints (collected over all dynamics consistent with the data) | (Bisoffi et al., 2021)
SOS polynomial optimization | p = z^\top Q z with Gram matrix Q \succeq 0 | (Averkov, 2018, Henrion, 2013)
Piecewise LTV stability | Segmentwise quadratic Lyapunov functions x^\top P_i x | (Ahmed et al., 2023)
Neural control with certification | LMI stability/invariance constraints, enforced by differentiable projection | (Tang et al., 7 Apr 2026)
Inertial parameter identification | Pseudo-inertia LMI J(\pi) \succeq 0 | (Wensing et al., 2017)

LMIs remain the pre-eminent tractable tool for representing, analyzing, and certifying a diversity of convex specifications in systems, optimization, and data analysis. Their integration into emerging data-driven and learning-based methodologies continues to drive methodological and computational advances across control, machine learning, optimization, and applied mathematics.
