DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization (1706.02586v3)

Published 8 Jun 2017 in math.OC, cs.DS, cs.SY, and stat.ML

Abstract: In recent years, optimization theory has been greatly impacted by the advent of sum of squares (SOS) optimization. The reliance of this technique on large-scale semidefinite programs however, has limited the scale of problems to which it can be applied. In this paper, we introduce DSOS and SDSOS optimization as linear programming and second-order cone programming-based alternatives to sum of squares optimization that allow one to trade off computation time with solution quality. These are optimization problems over certain subsets of sum of squares polynomials (or equivalently subsets of positive semidefinite matrices), which can be of interest in general applications of semidefinite programming where scalability is a limitation. We show that some basic theorems from SOS optimization which rely on results from real algebraic geometry are still valid for DSOS and SDSOS optimization. Furthermore, we show with numerical experiments from diverse application areas---polynomial optimization, statistics and machine learning, derivative pricing, and control theory---that with reasonable tradeoffs in accuracy, we can handle problems at scales that are currently significantly beyond the reach of traditional sum of squares approaches. Finally, we provide a review of recent techniques that bridge the gap between our DSOS/SDSOS approach and the SOS approach at the expense of additional running time. The Supplementary Material of the paper introduces an accompanying MATLAB package for DSOS and SDSOS optimization.

Citations (209)

Summary

  • The paper introduces DSOS and SDSOS optimization as efficient LP and SOCP alternatives to traditional SOS programming, enhancing scalability.
  • It demonstrates these methods' practical value through extensive numerical experiments in control theory, finance, and robotics.
  • The work provides theoretical guarantees and a MATLAB iSOS package, facilitating broader adoption in solving complex polynomial optimization problems.

Essay on "DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization"

Introduction

The paper by Amir Ali Ahmadi and Anirudha Majumdar introduces innovative approaches to optimization known as DSOS (diagonally-dominant-sum-of-squares) and SDSOS (scaled-diagonally-dominant-sum-of-squares) optimization. This work addresses the limitations of sum of squares (SOS) programming in handling large-scale semidefinite programs (SDPs) by proposing linear programming (LP) and second-order cone programming (SOCP) based alternatives. The primary objective is to provide more tractable means for polynomial nonnegativity testing, which is prevalent in many applications but becomes computationally burdensome with SOS due to the large matrix dimensions involved in SDPs.
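
For readers less familiar with the terminology, the definitions underlying these relaxations (standard in the paper) can be stated compactly: a symmetric matrix A is diagonally dominant (dd) if

$$a_{ii} \ \ge\ \sum_{j \neq i} |a_{ij}| \quad \text{for all } i,$$

and scaled diagonally dominant (sdd) if DAD is dd for some positive diagonal matrix D. A polynomial p of degree 2d is dsos (resp. sdsos) if it admits a Gram representation $p(x) = z(x)^\top Q\, z(x)$, where $z(x)$ collects the monomials of degree at most d and Q is dd (resp. sdd). Since dd matrices are described by linear inequalities and sdd matrices decompose into 2x2 positive semidefinite blocks, optimizing over these cones is an LP and an SOCP, respectively.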

Summary of Contributions

  1. Introduction of DSOS and SDSOS: The authors define DSOS and SDSOS polynomials as structured subsets of SOS polynomials. Optimizing over DSOS polynomials reduces to an LP, and optimizing over SDSOS polynomials to an SOCP, trading some conservativeness in the nonnegativity certificate for substantial gains in computational efficiency (see the sketch after this list).
  2. Scalability and Numerical Experiments: Extensive numerical experiments demonstrate the scalability of DSOS and SDSOS optimization in several application areas. These include polynomial optimization, copositive programming, convex regression, options pricing, sparse principal component analysis, and control theory.
  3. Asymptotic Guarantees: The paper shows that basic positivity results from real algebraic geometry carry over to these weaker cones: any even positive definite form p is r-dsos for sufficiently large r, meaning that p(x)(x_1^2 + ... + x_n^2)^r admits a dsos certificate. This provides theoretical assurance that broader classes of nonnegative polynomials can be reached within the DSOS/SDSOS framework.
  4. Comparison with Existing LP and SDP Hierarchies: By contrasting DSOS/SDSOS methods with existing hierarchies like Polya’s LP hierarchy, the authors highlight the improved bounds and computational benefits of their method. The connection to traditional SDP techniques is maintained, ensuring theoretical rigor.
  5. Software Package iSOS: The development of a MATLAB package that automates the formulation of DSOS and SDSOS problems is a significant contribution, facilitating broader adoption and experimentation with these methods.
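
To make the mechanics concrete, here is a minimal Python sketch, referenced from item 1 above (it is not taken from the paper's package; the helper name is our own), that checks the diagonal-dominance condition at the heart of DSOS:

```python
import numpy as np

def is_dd(A, tol=1e-9):
    """Diagonal dominance: A[i, i] >= sum of |A[i, j]| over j != i, for every row i."""
    A = np.asarray(A, dtype=float)
    off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) >= off_diag - tol))

# A dd matrix with nonnegative diagonal entries is PSD (Gershgorin's circle
# theorem), so imposing dd on a Gram matrix is a *linear* sufficient condition
# for the underlying polynomial to be a sum of squares.
Q = np.array([[ 2.0, -1.0,  0.5],
              [-1.0,  3.0,  1.0],
              [ 0.5,  1.0,  2.0]])
print(is_dd(Q))                                  # True
print(bool(np.all(np.linalg.eigvalsh(Q) >= 0)))  # True: dd implies PSD here
```

The sdd analogue replaces this row-wise test with the existence of a positive diagonal scaling D such that DQD is dd, a search that an SOCP solver handles directly.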

Implications and Applications

The introduction of DSOS and SDSOS optimization is significant because it opens up possibilities for efficiently solving optimization problems previously deemed infeasible due to scalability issues. Particularly in control theory, finance, and machine learning, the ability to handle larger problem instances extends the applicability of polynomial optimization techniques. For instance, region-of-attraction computations in robotics can now be approximated far more efficiently, offering practical benefits for real-time control systems.
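
As one illustration (a standard Lyapunov formulation, not spelled out in this summary), an inner estimate $\{x : V(x) \le \rho\}$ of the region of attraction for dynamics $\dot x = f(x)$ can be certified via the S-procedure, with the usual SOS constraints relaxed to sdsos:

$$-\nabla V(x)^\top f(x) \;+\; \lambda(x)\big(V(x) - \rho\big) \;-\; \epsilon\, x^\top x \ \text{ is sdsos}, \qquad \lambda(x) \ \text{ is sdsos}.$$

Maximizing $\rho$ subject to these constraints yields the region-of-attraction estimate at SOCP rather than SDP cost.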

Furthermore, the implications extend to theoretical advancements in optimization, particularly in the convex approximation of nonconvex problems. By enabling the use of LP and SOCP frameworks, the paper capitalizes on existing mature solver technologies, enhancing the practicality of polynomial optimization.

Future Directions

The work suggests future research in refining the trade-offs between solution quality and computational efficiency. Developing adaptive methodologies that switch between DSOS, SDSOS, and SOS optimization based on problem-specific criteria could further enhance efficiency. Additionally, the integration of these methods with other problem structures like sparsity and symmetry remains an open question that could lead to more comprehensive optimization frameworks.

In summary, the paper's contributions to DSOS and SDSOS optimization represent a valuable addition to optimization literature by making polynomial optimization more accessible through tractable alternatives. This lays the groundwork for future explorations into scalable and efficient approaches in both theoretical and practical domains of optimization.