Quadratic Nonconvex Reformulation
- Quadratic Nonconvex Reformulation is a set of methodologies that recast nonconvex quadratic programs into alternative mathematical models using lifting and convex relaxations.
- It employs advanced techniques such as CP, DNN, SDP, and SOC relaxations along with reformulation–linearization to improve computational tractability.
- QNR methods are applied in fields like portfolio optimization, robust control, and network analysis to achieve tighter bounds and faster global convergence.
Quadratic Nonconvex Reformulation (QNR) refers to a broad class of methodologies that recast nonconvex quadratic programs (single- or multi-objective, with quadratic functions in the objective and/or constraints) into alternative mathematical programs that admit stronger relaxations, permit convexification, or otherwise improve the computational tractability of these intrinsically hard problems. These techniques exploit lifting, convex and conic relaxations, decomposition, mixed-integer programming encodings, and advanced polyhedral or duality perspectives to either directly solve or globally approximate nonconvex quadratic programs. QNR now forms a foundational set of tools in modern global optimization, with both theoretical and computational advances across continuous, mixed-integer, and structured sparsity regimes.
1. Fundamental Principles and Lifting Methodologies
The central structural tool in QNR is lifting, which replaces products of primal variables by higher-dimensional representations (often a matrix variable $X$ intended to satisfy $X = xx^\top$). This shift translates the original nonlinear and nonconvex quadratic terms into linear constraints in an extended space, typically resulting in a problem over the set of rank-one matrices $X = xx^\top$ corresponding to feasible $x$.
For example, for minimizing a possibly indefinite quadratic form over box constraints $x \in [0,1]^n$, the problem can be reformulated over the convex hull of $\{(x, xx^\top) : x \in [0,1]^n\}$ with the lifted variable $X = xx^\top$ (Anstreicher et al., 15 Jan 2025). Full characterization of such convex hulls is tractable only in low dimensions, which motivates advanced relaxation techniques.
The use of completely positive (CP) or doubly nonnegative (DNN) matrix cones transforms the lifted rank-one nonconvexity into convex conic constraints (Bai et al., 2012, Kim et al., 2019, Yildirim, 2020), albeit for large $n$ this comes at the cost of significant computational complexity unless further structure is exploited.
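The lifting identity underlying these reformulations can be checked numerically. The following minimal sketch (using NumPy; the data $Q$ and $x$ are arbitrary illustrations, not drawn from any cited instance) verifies that the quadratic form becomes linear in the lifted matrix variable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# An indefinite symmetric matrix Q and a feasible point x in the box [0, 1]^n.
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2
x = rng.uniform(0.0, 1.0, size=n)

# Lifting: replace the quadratic form x^T Q x by the linear form <Q, X>
# over the rank-one matrix X = x x^T.
X = np.outer(x, x)

quad_form = x @ Q @ x       # original nonconvex quadratic term
lifted = np.trace(Q @ X)    # linear in the lifted variable X

assert np.isclose(quad_form, lifted)
# X is rank one and PSD; relaxations drop the rank constraint and keep
# only convex conditions on X (e.g., PSD-ness, nonnegativity, RLT cuts).
assert np.linalg.matrix_rank(X) == 1
```

Relaxations then optimize the linear function $\langle Q, X\rangle$ over a convex superset of the rank-one matrices, which is where the CP, DNN, SDP, and SOC cones of the next section enter.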
2. Convex Relaxations: CP, DNN, SDP, and SOC Approaches
Convex relaxations constitute the canonical means for making nonconvex QPs tractable. Notable techniques include:
- Completely Positive (CP) Relaxation: Formulates the lifted problem over the cone of completely positive matrices, providing tight relaxations when feasible (Bai et al., 2012, Kim et al., 2019). However, membership in the CP cone is NP-hard.
- Doubly Nonnegative (DNN) Relaxation: Approximates the CP cone by intersecting the positive semidefinite (PSD) cone with the nonnegative orthant, yielding a tractable semidefinite program (SDP) (Bai et al., 2012, Kim et al., 2019, Yildirim, 2020). The DNN cone coincides with the CP cone for matrix dimensions up to four, which is exploited in block-clique decomposition schemes.
- SDP Relaxations: Incorporated either directly or in hybrid form with RLT or other valid inequalities (Jiang et al., 2016, Anstreicher et al., 15 Jan 2025). SDP relaxations support additional polyhedral strengthening, e.g., by triangle, RLT, and extended triangle inequalities (ETRIs), as seen in box-constrained QPs (Anstreicher et al., 15 Jan 2025).
- Second-Order Cone (SOC) Relaxations: By decomposing nonconvex quadratic constraints via spectral splitting, SOC reformulations enable the construction of tractable relaxations—particularly effective in sparse or structured problems (Jiang et al., 2016, Dey et al., 25 Aug 2025). Under certain treewidth and sparsity conditions, polynomial-size SOC-representable formulations for the convex hull are obtainable (Dey et al., 25 Aug 2025).
- Valid Inequalities and Cutting Surfaces: Adaptive diagonal perturbations and systematic cut generation strengthen standard relaxations with modest computational overhead and limited additional variables, enhancing practical branch-and-bound performance (Dong, 2014, Anstreicher et al., 15 Jan 2025).
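The spectral splitting behind SOC relaxations is straightforward to illustrate: a symmetric $Q$ is written as a difference of PSD matrices via its eigendecomposition, so the convex part of the quadratic becomes second-order-cone representable while only the concave part needs relaxing. A minimal NumPy sketch (random data, for illustration only):

```python
import numpy as np

def spectral_split(Q):
    """Split symmetric Q into a difference of PSD matrices, Q = Qp - Qm."""
    w, V = np.linalg.eigh(Q)
    Qp = V @ np.diag(np.maximum(w, 0.0)) @ V.T   # convex (PSD) part
    Qm = V @ np.diag(np.maximum(-w, 0.0)) @ V.T  # concave part
    return Qp, Qm

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2

Qp, Qm = spectral_split(Q)
x = rng.standard_normal(n)

# x^T Q x = x^T Qp x - x^T Qm x: the first term is SOC-representable,
# and only the concave second term must be relaxed or linearized.
assert np.allclose(Q, Qp - Qm)
assert np.isclose(x @ Q @ x, x @ Qp @ x - x @ Qm @ x)
# Both pieces are PSD up to roundoff.
assert np.linalg.eigvalsh(Qp).min() >= -1e-9
assert np.linalg.eigvalsh(Qm).min() >= -1e-9
```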
3. Reformulation–Linearization and Polyhedral Approaches
The Reformulation–Linearization Technique (RLT) is key to translating quadratic products into linear forms on lifted variables, enabling polyhedral relaxations:
- RLT Polyhedral Theory: The geometry of the RLT-relaxed feasible set reflects that of the original problem's polyhedron (vertices, faces, recession directions) (Qiu et al., 2023). The exactness of the RLT relaxation can be characterized by existence conditions on lifted variables corresponding to minimal faces of the base feasible set.
- Triangle and Extended Triangle Inequalities: For box-constrained quadratic programs, RLT constraints and triangle inequalities are classically necessary but insufficient for tight convex hull descriptions. Extended triangle inequalities (ETRIs) and their conic strengthenings (SOC inequalities) are derived via disjunctive and polyhedral analysis to close remaining relaxation gaps in higher dimensions (Anstreicher et al., 15 Jan 2025).
- Hierarchy and Dominance of Cuts: There exists a hierarchy among RLT, SOC–RLT, GSRT, and further lifted inequalities, with associated dominance and computational trade-offs, particularly under nonconvex quadratic constraints (Jiang et al., 2016).
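The basic RLT step on a box can be made concrete: multiplying the bound constraints $x_i \ge 0$ and $1 - x_i \ge 0$ pairwise, then replacing each product $x_i x_j$ by a lifted variable $X_{ij}$, yields the McCormick-type inequalities checked below, which every exact lifting satisfies. A small NumPy check (the helper name `rlt_violations` is illustrative, not from the cited papers):

```python
import numpy as np

def rlt_violations(x, X, tol=1e-9):
    """Count violated RLT (McCormick) inequalities for x in [0,1]^n, lifted X."""
    n = len(x)
    count = 0
    for i in range(n):
        for j in range(n):
            # Products of bound constraints, linearized with X_ij in place
            # of the product x_i * x_j:
            if X[i, j] < -tol:                   # from x_i * x_j >= 0
                count += 1
            if X[i, j] < x[i] + x[j] - 1 - tol:  # from (1-x_i)(1-x_j) >= 0
                count += 1
            if X[i, j] > x[i] + tol:             # from x_i (1 - x_j) >= 0
                count += 1
            if X[i, j] > x[j] + tol:             # from (1 - x_i) x_j >= 0
                count += 1
    return count

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=4)
X = np.outer(x, x)  # the exact lifting satisfies every RLT inequality
assert rlt_violations(x, X) == 0
```

The relaxation gap arises because matrices $X$ satisfying these linear inequalities need not be rank one; triangle and extended triangle inequalities cut deeper into that gap.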
4. Mixed-Integer and Discretization-Based Reformulations
QNR encompasses reformulations targeting quadratic integer and mixed-integer programs via:
- Mixed-Binary Convex Quadratic Reformulation: Nonconvex quadratic integer problems are converted to mixed-binary programs with convexified objectives by shifting the Hessian via an optimal diagonal perturbation (solved via SDP) and encoding variables through auxiliary binaries (Xia et al., 2014). This approach yields stronger continuous relaxations, essential for branch-and-bound methods.
- Discrete Variable Encodings and Piecewise Approximations: High-fidelity piecewise linear or sawtooth MIP relaxations (with logarithmic growth in encoding variables per approximation error) enable compact and sharp dual bounds for quadratic constraints and objectives, utilizing Gray code logic for projection sharpness (Beach et al., 2020, Beach et al., 2022).
- MILP Encodings of KKT and Complementarity: Nonconvex QPs are reformulated as MILPs using binary variables and big-$M$ logic to encode complementarity, with theoretical guarantees on the validity of dual bounds and robustness to dual unboundedness (Xia et al., 2015, Gondzio et al., 2018).
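The diagonal-shift convexification for binary quadratics can be sketched as follows. The uniform shift by $-\lambda_{\min}(Q)$ used here is the simplest valid perturbation; (Xia et al., 2014) instead optimizes the diagonal via an SDP to tighten the continuous relaxation:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2  # indefinite Hessian of a binary quadratic program

# Shift the diagonal by -lambda_min(Q) so that Qc = Q + diag(d) is PSD.
# On binary points x_i^2 = x_i, so the shift is undone by a linear term.
lam_min = np.linalg.eigvalsh(Q).min()
d = np.full(n, max(0.0, -lam_min))
Qc = Q + np.diag(d)

assert np.linalg.eigvalsh(Qc).min() >= -1e-9  # convexified Hessian

# The convexified objective x^T Qc x - d^T x agrees with x^T Q x on {0,1}^n,
# so the reformulation is exact while its continuous relaxation is convex.
for bits in product([0.0, 1.0], repeat=n):
    x = np.array(bits)
    assert np.isclose(x @ Q @ x, x @ Qc @ x - d @ x)
```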
5. Geometric and Decomposition-Based Conic Reformulation
Recent advances provide geometric frameworks for convexifying nonconvex conic programs:
- Face-Intersection Geometric Reformulation: The convex hull of the intersection of a nonconvex cone, a face of its convex hull, and a supporting hyperplane can, under moderate assumptions, be replaced with the intersection of the face and the hyperplane alone, retaining equivalence in optimal value (Kim et al., 2019, Arima et al., 2023).
- Block-Clique Graph Decomposition: Leveraging correlative sparsity, the DNN/CP reformulations can be decomposed into tree-structured subproblems. When maximal cliques contain at most four variables, the DNN and CP cones coincide, ensuring exactness of the convex relaxation. This yields substantial computational tractability on large structured instances (Kim et al., 2019).
- Condition for Exact Convexification: For a broad class of QCQPs, as in (Arima et al., 2023), necessary and sufficient conditions determine when an exact convex conic or SDP reformulation is possible.
6. Algorithmic and Computational Perspectives
QNR provides both primal and dual pathway improvements for solvers:
- Primal Heuristics and Branch-and-Bound: Extensions of Frank–Wolfe–based frameworks to nonconvex regimes employ reformulations (e.g., penalization of quadratic constraints, perspective and complementarity reformulations), along with advanced rounding, large neighborhood, and gradient-based search heuristics to efficiently explore integer-feasible vertices and accelerate global optimization (Mexi et al., 2 Aug 2025).
- Improved Relaxations in Modern Solvers: Quadratic nonconvex reformulations that engineer the structure of nonconvexity—by shifting quadratic terms into convex or isolated forms (e.g., via parameterized matrix splitting and SDP) and optimizing corresponding relaxation bounds—substantially accelerate branch-and-bound solvers such as Gurobi and SCIP, aligning their lower bounds with those obtainable in advanced SDP+RLT relaxations (Lu et al., 28 Aug 2025).
- Sharpness, Dual Bounds, and Relaxation Pathologies: Hereditary sharpness of discretization-based relaxations, robustness of MILP dual bounds via proper big-$M$ logic, and explicit construction of pathologies where feasible QPs have unbounded DNN relaxations (Yildirim, 2020, Beach et al., 2022) clarify the fine-grained limitations and strengths of various QNR techniques.
- Global Convergence and Krylov Methods: In large-scale or regularized nonconvex problems, projections onto Krylov subspaces and related polynomial approximation techniques yield efficient approximate solutions with provable error bounds, leveraging structure exploited in QNR (Carmon et al., 2018).
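The Krylov-projection idea is illustrated below for the convex case (a toy NumPy sketch; the cited work addresses nonconvex and regularized variants with provable error bounds): the quadratic is minimized over a growing Krylov subspace, and the distance to the exact minimizer shrinks as the subspace dimension grows.

```python
import numpy as np

def krylov_minimize(A, b, k):
    """Minimize 1/2 x^T A x - b^T x over the Krylov subspace K_k(A, b)."""
    n = len(b)
    # Build and orthonormalize the Krylov basis [b, Ab, ..., A^{k-1} b].
    K = np.empty((n, k))
    v = b.copy()
    for j in range(k):
        K[:, j] = v
        v = A @ v
    Qb, _ = np.linalg.qr(K)
    # Solve the k-dimensional projected problem and map back to R^n.
    y = np.linalg.solve(Qb.T @ A @ Qb, Qb.T @ b)
    return Qb @ y

rng = np.random.default_rng(4)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)     # well-conditioned PSD matrix
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)  # exact minimizer of the convex quadratic

errs = [np.linalg.norm(krylov_minimize(A, b, k) - x_star) for k in (2, 5, 10)]
# The approximation error shrinks as the subspace dimension grows.
assert errs[0] > errs[1] > errs[2]
```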
7. Practical Impact, Applications, and Limitations
QNR techniques are applicable in numerous domains—portfolio optimization, robust control, machine learning, combinatorial optimization, and network analysis—where nonconvex quadratic models are pervasive. Portfolio selection, for example, benefits from QNR-based exact relaxations capable of producing sparse, interpretable solutions (Bai et al., 2012).
The reformulation approaches enable exact or provably tight global solutions when classic convexification is ineffective (notably for indefinite or dense quadratic forms), and they improve numerical performance even in generic global solvers by providing tighter lower bounds from the outset (Anstreicher et al., 15 Jan 2025, Lu et al., 28 Aug 2025, Mexi et al., 2 Aug 2025). Limitations occur in the exponential growth of variables/constraints under indiscriminate lifting, the computational expense of solving large SDPs, or pathological loss of relaxation tightness (e.g., for DNN relaxations with negative curvature in the recession cone (Yildirim, 2020)).
Conclusion
Quadratic Nonconvex Reformulation (QNR) unifies and extends various global optimization approaches for nonconvex quadratic programs. It encompasses lifting, conic and polyhedral relaxation, advanced decomposition, discretization, mixed-integer programming, and geometric convexification techniques, all aimed at either exactly solving or efficiently approximating the solution of inherently hard quadratic problems. The theoretical advances—exactness conditions, domination hierarchies among relaxations, and structural decompositions—directly translate into improved computational algorithms and solvers, expanding the set of tractable quadratic optimization problems in both academic and applied contexts.