
Modified Interior-Point Method for Large-and-Sparse Low-Rank Semidefinite Programs (1703.10973v2)

Published 31 Mar 2017 in math.OC and math.NA

Abstract: Semidefinite programs (SDPs) are powerful theoretical tools that have been studied for over two decades, but their practical use remains limited due to computational difficulties in solving large-scale, realistic-sized problems. In this paper, we describe a modified interior-point method for the efficient solution of large-and-sparse low-rank SDPs, which finds applications in graph theory, approximation theory, control theory, sum-of-squares, etc. Given that the problem data is large-and-sparse, conjugate gradients (CG) can be used to avoid forming, storing, and factoring the large and fully-dense interior-point Hessian matrix, but the resulting convergence rate is usually slow due to ill-conditioning. Our central insight is that, for a rank-$k$, size-$n$ SDP, the Hessian matrix is ill-conditioned only due to a rank-$nk$ perturbation, which can be explicitly computed using a size-$n$ eigendecomposition. We construct a preconditioner to "correct" the low-rank perturbation, thereby allowing preconditioned CG to solve the Hessian equation in a few tens of iterations. This modification is incorporated within SeDuMi, and used to reduce the solution time and memory requirements of large-scale matrix-completion problems by several orders of magnitude.
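The abstract's central idea can be illustrated on a toy problem: a matrix that is well-conditioned except for a low-rank perturbation, where a preconditioner built from the Woodbury identity "corrects" that perturbation so preconditioned CG converges in very few iterations. The sketch below is not the paper's SeDuMi implementation; the diagonal "easy" part, the random low-rank factor `U`, and the problem sizes `n`, `k` are all stand-ins chosen for illustration (in the paper the perturbation of the interior-point Hessian has rank $nk$ and the correction is computed from a size-$n$ eigendecomposition).

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n, k = 200, 40  # hypothetical sizes, chosen only for this demo

# Well-conditioned "easy" part of the matrix (diagonal for simplicity).
d = rng.uniform(1.0, 2.0, n)

# Low-rank perturbation U @ U.T with widely spread scales -> ill-conditioned H.
U = rng.standard_normal((n, k)) * rng.uniform(1e1, 1e4, k)
H = np.diag(d) + U @ U.T
b = rng.standard_normal(n)

# Woodbury identity corrects the low-rank perturbation:
# (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1}
Dinv_U = U / d[:, None]
S = np.eye(k) + U.T @ Dinv_U  # small k-by-k "capacitance" matrix, cheap to factor

def apply_precond(r):
    y = r / d
    return y - Dinv_U @ np.linalg.solve(S, U.T @ y)

P = LinearOperator((n, n), matvec=apply_precond)

iters = {"plain": 0, "pcg": 0}
x_plain, info_plain = cg(
    H, b, callback=lambda xk: iters.__setitem__("plain", iters["plain"] + 1))
x_pcg, info_pcg = cg(
    H, b, M=P, callback=lambda xk: iters.__setitem__("pcg", iters["pcg"] + 1))

print(iters)  # preconditioned CG needs far fewer iterations than plain CG
```

Here the Woodbury correction happens to invert `H` exactly, so preconditioned CG converges almost immediately; in the paper's setting the correction is applied to an approximation of the Hessian, and the authors report convergence in a few tens of iterations rather than one.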
