
cuHALLaR: A GPU Accelerated Low-Rank Augmented Lagrangian Method for Large-Scale Semidefinite Programming (2505.13719v1)

Published 19 May 2025 in math.OC

Abstract: This paper introduces cuHALLaR, a GPU-accelerated implementation of the HALLaR method proposed in Monteiro et al. (2024) for solving large-scale semidefinite programming (SDP) problems. We demonstrate how our Julia-based implementation efficiently uses GPU parallelism through optimization of simple but key operations, including linear maps, adjoints, and gradient evaluations. Extensive numerical experiments across three SDP problem classes (maximum stable set, matrix completion, and phase retrieval) show significant performance improvements over both CPU implementations and existing GPU-based solvers. For the largest instances, cuHALLaR achieves speedups of 30-140x on matrix completion problems, up to 135x on maximum stable set problems for Hamming graphs with 8.4 million vertices, and 15-47x on phase retrieval problems with dimensions up to 3.2 million. Our approach efficiently handles massive problems with dimensions up to (n,m) = (8 million, 300 million) with high precision, solving matrix completion instances with over 8 million rows and columns in just 142 seconds. These results establish cuHALLaR as a very promising GPU-based method for solving large-scale semidefinite programs.
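To make the "linear maps, adjoints, and gradient evaluations" concrete, the sketch below shows the kind of low-rank augmented Lagrangian gradient that methods in the HALLaR family evaluate repeatedly. This is a minimal NumPy illustration under stated assumptions, not the paper's actual Julia/GPU implementation: the function names, the constraint representation as a list of dense symmetric matrices, and the tiny problem sizes are all illustrative.

```python
import numpy as np

# Hedged sketch: for the SDP
#     min <C, X>  s.t.  A(X) = b,  X PSD,
# the low-rank substitution X = Y Y^T gives the augmented Lagrangian
#     L_beta(Y; lam) = <C, YY^T> + <lam, A(YY^T) - b>
#                      + (beta/2) * ||A(YY^T) - b||^2,
# whose gradient in Y is  2 * (C + A^*(lam + beta*(A(YY^T) - b))) @ Y.
# These dense products (A, its adjoint A^*, and the final matrix-matrix
# multiply) are the operations a GPU implementation parallelizes.

def A_map(X, A_mats):
    """Linear map A(X)_i = <A_i, X> for symmetric data matrices A_i."""
    return np.array([np.sum(Ai * X) for Ai in A_mats])

def A_adjoint(y, A_mats):
    """Adjoint A^*(y) = sum_i y_i * A_i."""
    return sum(yi * Ai for yi, Ai in zip(y, A_mats))

def aug_lag_grad(Y, C, A_mats, b, lam, beta):
    """Gradient of the augmented Lagrangian in the low-rank factor Y."""
    X = Y @ Y.T
    resid = A_map(X, A_mats) - b               # primal residual A(YY^T) - b
    S = C + A_adjoint(lam + beta * resid, A_mats)
    return 2.0 * S @ Y

# Tiny example: n = 3, rank r = 2, one trace constraint tr(X) = 1.
rng = np.random.default_rng(0)
n, r = 3, 2
C = rng.standard_normal((n, n)); C = (C + C.T) / 2.0
A_mats = [np.eye(n)]                           # A(X) = tr(X)
b = np.array([1.0])
Y = rng.standard_normal((n, r))
g = aug_lag_grad(Y, C, A_mats, b, lam=np.zeros(1), beta=10.0)
print(g.shape)
```

Storing only the n-by-r factor Y (instead of the n-by-n matrix X) is what lets this approach scale to the multi-million-dimension instances reported in the abstract; in the GPU setting, all three kernels above reduce to batched dense linear algebra.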
