
Quadratically Constrained Linear Programming (QCLP)

Updated 19 March 2026
  • QCLP is an optimization framework defined by a linear objective and quadratic constraints, where the nonlinearity is confined to the constraints.
  • Convex relaxations like SOCP and parabolic methods transform QCLP into tractable forms under specific convexity conditions, ensuring tight approximations.
  • Applications of QCLP span causal inference, robust template matching, and sparse kernel feature selection, using tailored algorithms for efficiency and scalability.

A quadratically constrained linear program (QCLP) is an optimization problem characterized by a linear objective function and one or more quadratic constraints. The classical form is

$$\min_{x\in\mathbb{R}^n} \; c^T x \quad \text{subject to} \quad x^T Q_i x + a_i^T x + b_i \le 0, \quad i = 1, \ldots, m,$$

where $c\in\mathbb{R}^n$ is the cost vector, each $Q_i$ is a symmetric matrix, $a_i\in\mathbb{R}^n$, and $b_i\in\mathbb{R}$. QCLP is a specialized subclass of quadratically constrained quadratic programming (QCQP), notable for its linear objective, which grants it certain structural and algorithmic advantages, especially in convex settings.
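The simplest non-trivial instance takes a single ball constraint ($Q = I$, $a = 0$, $b = -1$), for which the optimum has a closed form. A minimal sketch, assuming this toy instance (not drawn from the cited works):

```python
import math

def solve_ball_qclp(c):
    """Solve min c^T x subject to x^T x <= 1 (single-ball QCLP).

    With Q = I, a = 0, b = -1, the feasible set is the unit ball, and
    the linear objective is minimized at the boundary point opposite c:
    x* = -c / ||c||, with optimal value -||c||.
    """
    norm = math.sqrt(sum(ci * ci for ci in c))
    if norm == 0.0:                       # zero objective: any feasible x is optimal
        return [0.0] * len(c), 0.0
    x_star = [-ci / norm for ci in c]
    return x_star, -norm

x, val = solve_ball_qclp([3.0, 4.0])
# x = [-0.6, -0.8], val = -5.0; the constraint is tight: x^T x = 1
```

The constraint is active at the optimum whenever $c \neq 0$, which is the typical situation for QCLP: a linear objective pushes the solution to the boundary of the quadratic feasible set.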

1. Structural and Mathematical Properties

In QCLP, all nonlinearity resides in the constraint set; the objective remains strictly linear. The feasible region is, in general, non-convex due to potentially indefinite $Q_i$, although many important special cases (trust region, ellipsoidal constraints, etc.) are convex. Convexity results hold when each $Q_i \succeq 0$, rendering the constraint sets convex and endowing the problem with strong duality and tractable convex relaxations.
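To see how an indefinite $Q$ breaks convexity, consider the hyperbolic constraint $x_1^2 - x_2^2 \le 0$, i.e. $Q = \mathrm{diag}(1, -1)$. A minimal illustration (a toy check, not from the cited works) exhibits two feasible points whose midpoint is infeasible:

```python
def feasible(x):
    """Membership in {x : x1^2 - x2^2 <= 0}, with indefinite Q = diag(1, -1)."""
    return x[0] ** 2 - x[1] ** 2 <= 0.0

p, q = (1.0, 1.0), (1.0, -1.0)                  # both on the boundary, both feasible
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)    # midpoint (1, 0)

print(feasible(p), feasible(q), feasible(mid))  # True True False
```

Since a convex set must contain the segment between any two of its points, this failure certifies that the feasible region is non-convex, motivating the relaxations surveyed below.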

When the feasible set is the intersection of quadratic and linear constraints, the Karush-Kuhn-Tucker (KKT) conditions provide necessary (under a constraint qualification) and, under convexity, sufficient optimality criteria. For example, for

$$\min c^T x \quad \text{s.t.} \quad x^T Q x + a^T x + b \le 0$$

the optimality system comprises a stationarity equation

$$c + 2 \lambda Q x^* + \lambda a = 0$$

with $\lambda \ge 0$, primal feasibility, complementary slackness, and dual feasibility (Madani et al., 2022).
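For the single-ball instance $\min c^T x$ s.t. $x^T x \le 1$ (so $Q = I$, $a = 0$, $b = -1$), the stationarity equation reduces to $c + 2\lambda x^* = 0$, giving $\lambda = \|c\|/2$ and $x^* = -c/\|c\|$. A quick numeric verification of all four KKT conditions for this toy instance (an illustration, not code from the cited paper):

```python
import math

c = [3.0, 4.0]
norm_c = math.sqrt(sum(ci * ci for ci in c))    # ||c|| = 5
x_star = [-ci / norm_c for ci in c]             # candidate optimum -c/||c||
lam = norm_c / 2.0                              # multiplier from c + 2*lam*x* = 0

stationarity = [ci + 2.0 * lam * xi for ci, xi in zip(c, x_star)]
g = sum(xi * xi for xi in x_star) - 1.0         # constraint value x^T x - 1

assert all(abs(s) < 1e-12 for s in stationarity)  # stationarity
assert g <= 1e-12                                 # primal feasibility
assert lam >= 0.0                                 # dual feasibility
assert abs(lam * g) < 1e-12                       # complementary slackness
```

Because $Q = I \succeq 0$ the problem is convex, so these KKT conditions are also sufficient and $x^*$ is the global optimum.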

2. Convex Relaxations and Exact Reformulations

QCLP admits several convex reformulations that preserve optimality under suitable structural assumptions:

  • Second-Order Cone Programming (SOCP) Relaxation: For QCLPs with $Q_i \succeq 0$, simultaneous diagonalization (SD) of constraints can transform each quadratic constraint into a sum of univariate quadratic forms, which can be represented as SOCP constraints. Exactness is guaranteed for single or two quadratic constraints under mild Slater-type conditions. For a single convex quadratic constraint, the SOCP formulation is always tight; for two, it remains exact if the constraint set is strictly feasible (Jiang et al., 2015).
  • Parabolic Relaxation: Introducing a lifted matrix variable $X \succeq xx^T$ and imposing "parabolic constraints" between $X$ and $x$ provides an alternative convexification:

$$\min_{x,X}\; c^T x \quad \text{s.t. quadratic and parabolic constraints}$$

A sequential penalized algorithm driving $X \rightarrow xx^T$ converges to KKT points of the original QCLP under standard regularity and a sufficiently large penalty parameter $\eta$ (Madani et al., 2022). In practice, this relaxation is empirically tighter than standard SOCP for some problem classes.

  • Convex Hull via Disjunctive SOC Representation: The convex hull of the intersection of a quadratic constraint with a bounded polyhedron is second-order cone representable. A constructive, facet-wise induction yields a finite SOC-representable convex set, ensuring that the corresponding SOCP relaxation is globally tight without relaxation gap (Santana et al., 2018).
  • Linear Programming (LP) Outer Approximation: By lifting $X = xx^T$ and iteratively adding cutting planes corresponding to PSD constraints (dense, sparse, and minor-based cuts), LP relaxations can closely approximate the SDP strength required for QCLP bounds, empirically closing most of the duality gap with significantly less computational overhead than SDP solvers (Qualizza et al., 2012).
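As a concrete instance of the SOCP reformulation idea, a convex quadratic constraint $\|u\|^2 \le t$ is equivalent to the second-order cone constraint $\|(2u,\, t-1)\|_2 \le t+1$ (a standard identity, not specific to the cited papers). The sketch below verifies the equivalence numerically on random points:

```python
import random

def quad_feasible(u, t):
    """Original convex quadratic constraint: ||u||^2 <= t."""
    return sum(ui * ui for ui in u) <= t

def soc_feasible(u, t):
    """Equivalent second-order cone form: ||(2u, t - 1)||_2 <= t + 1."""
    lhs_sq = sum((2.0 * ui) ** 2 for ui in u) + (t - 1.0) ** 2
    return t + 1.0 >= 0.0 and lhs_sq <= (t + 1.0) ** 2

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-2.0, 2.0) for _ in range(3)]
    t = random.uniform(-1.0, 5.0)
    assert quad_feasible(u, t) == soc_feasible(u, t)
```

The algebra behind the identity: $\|(2u, t-1)\|^2 \le (t+1)^2$ expands to $4\|u\|^2 + t^2 - 2t + 1 \le t^2 + 2t + 1$, i.e. $\|u\|^2 \le t$. This is the building block that converts each diagonalized quadratic constraint into SOCP form.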

3. Algorithmic Approaches

Several algorithmic frameworks, leveraging the problem's convexity or structure, are prominent for QCLP:

  • Interior-Point Methods for SOCP/SDP: When quadratic constraints are convex, interior-point methods for SOCPs or SDPs solve the relaxations efficiently for small-to-medium scale instances (Jiang et al., 2015).
  • Cutting-Plane and LP-Based Approaches: For large-scale or sparse problems, outer approximation by LP with automatically generated PSD cuts, including dense eigenvector-based, sparse, or minor-based cuts, is highly scalable. Empirical studies show that the bulk of the SDP dual gap is closed within tens of cutting-plane rounds, often with dramatically reduced solve times compared to direct SDP approaches (Qualizza et al., 2012).
  • Sequential Convex Programming (Parabolic or SOCP-based): For nonconvex QCLPs, sequential penalized parabolic relaxations or branch-based approaches are used to recover globally feasible points iteratively, with proven convergence to KKT points given appropriate penalization (Madani et al., 2022).
  • Large-scale Approximate Linearization: For massive dimensionality, quadratic constraints can be approximated via tangent-plane sampling using low-discrepancy point sets. The quadratic constraint is replaced by a bundle of $N$ affine constraints, yielding a pure LP whose solution converges to the QCLP optimum at rate $O((\log N)^{n-2}/N)$ under standard regularity, making the approach tractable for $n$ up to $10^6$ (Basu et al., 2017).
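A minimal sketch of the tangent-plane idea in two dimensions (an illustration of the principle only, not the low-discrepancy construction of Basu et al.): the disk constraint $\|x\|_2 \le 1$ is replaced by $N$ tangent half-planes $s_k^T x \le 1$ at evenly spaced boundary points $s_k$, and the resulting tiny LP for $\min x_1$ is solved by enumerating vertices of the circumscribing polygon:

```python
import math

def tangent_lp_min_x1(N):
    """Approximate min x1 s.t. ||x||_2 <= 1 by N tangent half-planes.

    Each boundary point s_k = (cos a_k, sin a_k) contributes the tangent
    constraint s_k . x <= 1.  The LP optimum lies at a vertex, i.e. at
    the intersection of two adjacent active tangents.
    """
    angles = [2.0 * math.pi * k / N for k in range(N)]
    s = [(math.cos(a), math.sin(a)) for a in angles]

    best = math.inf
    for k in range(N):
        a, b = s[k], s[(k + 1) % N]          # adjacent tangent pair
        det = a[0] * b[1] - a[1] * b[0]
        if abs(det) < 1e-12:
            continue
        # Solve a.x = 1, b.x = 1 for the polygon vertex by Cramer's rule.
        x1 = (b[1] - a[1]) / det
        x2 = (a[0] - b[0]) / det
        if all(si[0] * x1 + si[1] * x2 <= 1.0 + 1e-9 for si in s):
            best = min(best, x1)
    return best

# True QCLP optimum of min x1 over the unit disk is -1.  The polyhedral
# outer approximation gives -1/cos(pi/N) for odd N (and exactly -1 for
# even N, where a tangent point lands at angle pi), converging as N grows.
```

The LP value approaches the true optimum from below, consistent with an outer approximation; in the cited framework the uniform grid is replaced by low-discrepancy point sets to control the error in high dimension.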

The table below summarizes selected algorithmic frameworks for QCLP:

| Approach | Key Property | Reference |
|---|---|---|
| SOCP reformulation | Tight for 1-2 constraints | (Jiang et al., 2015) |
| Parabolic relaxation | Sublinear convergence | (Madani et al., 2022) |
| LP + PSD cuts | SDP-strength bound, scalable | (Qualizza et al., 2012) |
| Low-discrepancy linearization | Scalable to $10^6$ dims | (Basu et al., 2017) |

4. Domain-Specific Applications

QCLP arises in multiple substantive domains:

  • Causal Inference in Observational Studies: QCLP provides a rigorous framework for sensitivity analysis under hidden confounding in matched studies with multiple outcomes. It permits simultaneous maximization of the "least significant" test among several while enforcing a unified hidden bias allocation, yielding strictly increased statistical power compared to Bonferroni or composition approaches, all with correct familywise error-rate control (Fogarty et al., 2015).
  • Sparse Kernel Feature Selection in Machine Learning: QCLP is employed as a relaxation of mixed-integer feature selection in support vector data description for anomaly detection. Here, QCLP provides a convex surrogate for the feature selection combinatorial subproblem in the empirical kernel feature space, solved efficiently using an iterative cutting-plane loop (Peng et al., 2015).
  • Robust Template Matching in Classification: The regularized maximin correlation approach reformulates robust linear template optimization as a QCLP, amenable to the kernel trick for nonlinear representation and featuring scalable primal-dual solution methods (Lee et al., 2015).

These applications underscore QCLP’s ability to model complex restrictions (hidden bias, sparsity, template robustness) within the tractable interface of convex programming.

5. Advanced Theoretical Insights

Multiple lines of research have deepened understanding of QCLP’s mathematical underpinnings:

  • Simultaneous Diagonalization as a Pathway to SOCP: The equivalence between simultaneous diagonalizability of the constraint matrices and the existence of exact SOCP reformulations for multi-constraint QCLP/QCQP has been precisely characterized. For two constraints, necessary and sufficient conditions for such diagonalizability yield a complete characterization of when convex SOCP-based algorithms deliver the true global optimum (Jiang et al., 2015).
  • Convex Hull Results: The exact second-order cone representability of the convex hull of a quadratic constraint over a polytope provides a theoretical guarantee for cutting-plane or relaxation algorithms in non-convex settings. For polytopes with few facets ("active" faces), this induction-based construction enables efficient SOCP modeling, though in general the number of required disjuncts is exponential in the number of facets (Santana et al., 2018).
  • Generic Scalability Principles: For problems with dimension $n \gg 10^4$, reliance on sampling-based LP outer approximations or kernel-trick-enabled QCLPs is necessary for computational tractability (Basu et al., 2017, Lee et al., 2015). Empirical kernel feature space mappings preserve geometry for nonlinear kernels in machine learning contexts (Peng et al., 2015).

6. Computational Strategies and Performance

While classical interior-point algorithms are effective for small to moderate problems, modern QCLP algorithms exploit structure and use hybrid or approximate relaxations for scalability:

  • Cutting-plane LP with McCormick (RLT) Bounds and PSD Cuts: By iteratively adding violated sparse/dense PSD cuts to a master problem with bound constraints, nearly the full SDP strength is captured. Empirical results demonstrate closure of 90% of the duality gap in under 50 rounds of cutting planes for standard test sets with $n \le 50$ (Qualizza et al., 2012).
  • Parabolic Sequential Penalization: Each iteration solves a convex QP with $O(n^2)$ parabolic constraints, and empirically sublinear convergence is observed. This method achieves tighter dual bounds and global convergence in moderate dimensions (Madani et al., 2022).
  • Low-Discrepancy Sampling for Large Scale: By construction of boundary tangent planes using quasi-Monte Carlo point sets on ellipsoidal boundaries, large dimensional QCLPs are reduced to tractable large-scale LPs. Convergence rates and error bounds are explicit, and computational cost is dominated by sampling and constraint assembly (Basu et al., 2017).
  • Kernelized QCLP in Machine Learning: The dual QCLP forms, amenable to kernelization, handle nonlinear data while enabling selection between primal or dual solution paths based on problem size and conditioning (Lee et al., 2015).
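The eigenvector-based PSD cut generation can be sketched as follows (a simplified illustration of the idea in Qualizza et al., using numpy). In any exact lifting the moment matrix $M(x, X) = \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix}$ is PSD, since it equals $vv^T$ for $v = (1, x)$ when $X = xx^T$; a negative eigendirection $w$ of the current $M$ therefore yields the valid linear cut $w^T M(x, X)\, w \ge 0$, which the current lifted point violates:

```python
import numpy as np

def psd_cut(x, X, tol=1e-8):
    """Return an eigenvector cut violated by the current lifted point (x, X).

    Builds M = [[1, x^T], [x, X]]; if M has an eigenvalue below -tol with
    eigenvector w, the inequality w^T M(x, X) w >= 0 is linear in (x, X),
    valid whenever X = x x^T, and violated by the current point.
    """
    n = len(x)
    M = np.empty((n + 1, n + 1))
    M[0, 0] = 1.0
    M[0, 1:] = x
    M[1:, 0] = x
    M[1:, 1:] = X
    eigvals, eigvecs = np.linalg.eigh(M)      # ascending eigenvalues
    if eigvals[0] < -tol:
        w = eigvecs[:, 0]                     # most negative eigendirection
        return w, float(w @ M @ w)            # cut vector and current violation
    return None, 0.0

# Example: X = 0 with x != 0 cannot arise from X = x x^T, so a cut exists.
x = np.array([1.0, 0.0])
X = np.zeros((2, 2))
w, violation = psd_cut(x, X)                  # violation < 0: point is separated
```

In a cutting-plane loop, the returned inequality is added to the LP master problem and the lifted point is re-solved, progressively recovering SDP strength without ever calling an SDP solver.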

The choice among these algorithmic forms is guided by problem dimension, convexity, and constraint structure.

7. Generalizations, Open Problems, and Future Directions

Recent developments suggest several directions:

  • More General Nonconvex QCLP: While convex relaxation and SOCP/SDP reformulations are well-understood for positive semidefinite $Q_i$, the design of tight, computable relaxations remains challenging for indefinite quadratic constraints, especially as the number of constraints increases (Santana et al., 2018).
  • Integration with Hierarchical and Closed Testing in Multiple Comparisons: The QCLP-based sensitivity framework for multiple outcomes in causal inference is extensible to any intersection null, allowing seamless joint error control in closed testing procedures (Fogarty et al., 2015).
  • Correlation Exploitation in Sequential Analysis: Current QCLP methods in multivariate sensitivity analysis assume independence of strata; leveraging cross-strata or cross-statistic correlations remains an open problem (Fogarty et al., 2015).
  • Hybrid Exact-Approximate Pipeline: Combining parabolic or SOCP relaxations with cutting-plane or sampling-based LP rounding offers refined practical algorithms when scalability precludes full SDP resolution.
  • Empirical Kernelization for Nonlinear Geometry: The use of empirical kernel feature spaces for sparse kernel learning demonstrates a general principle: explicit geometry-preserving embeddings can enable efficient QCLP computation in otherwise nonparametric models (Peng et al., 2015).

A plausible implication is that as model complexity and scale increase, success in QCLP will increasingly rely on hybrid relaxation, structured approximation, and problem-specific convexification techniques leveraging convex hull, simultaneous diagonalization, and low-discrepancy sampling.

