
Rapid Convergence of First-Order Numerical Algorithms via Adaptive Conditioning (2103.00736v1)

Published 1 Mar 2021 in math.OC

Abstract: This paper is an attempt to remedy the problem of slow convergence for first-order numerical algorithms by proposing an adaptive conditioning heuristic. First, we propose a parallelizable numerical algorithm that is capable of solving large-scale conic optimization problems on distributed platforms such as graphics processing units with orders-of-magnitude improvements in solution time. A proof of global convergence is provided for the proposed algorithm. We argue that, contrary to common belief, the condition number of the data matrix is not a reliable predictor of convergence speed. In light of this observation, an adaptive conditioning heuristic is proposed that enables higher accuracy than other first-order numerical algorithms. Numerical experiments on a wide range of large-scale linear programming and second-order cone programming problems demonstrate the scalability and computational advantages of the proposed algorithm compared to commercial and open-source state-of-the-art solvers.
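
To make the conditioning idea concrete, here is a minimal sketch of a generic diagonal equilibration step (Ruiz-style scaling) applied to a problem's data matrix before handing it to a first-order solver. This is an illustrative assumption, not the paper's adaptive conditioning heuristic: the function name `ruiz_equilibrate`, the iteration count, and the random test matrix are all hypothetical. It only shows how rescaling changes the condition number that the abstract argues is, by itself, a poor predictor of convergence speed.

```python
# Illustrative sketch (not the paper's algorithm): diagonal equilibration of the
# data matrix A, i.e. replacing A with D_r @ A @ D_c for diagonal scalings D_r, D_c.
import numpy as np

def ruiz_equilibrate(A, iters=10):
    """Return row/column scalings d_r, d_c and the rescaled matrix D_r @ A @ D_c."""
    m, n = A.shape
    d_r, d_c = np.ones(m), np.ones(n)
    B = A.copy()
    for _ in range(iters):
        # Max-norm of each row and column of the currently scaled matrix.
        row_norms = np.maximum(np.abs(B).max(axis=1), 1e-12)
        col_norms = np.maximum(np.abs(B).max(axis=0), 1e-12)
        d_r /= np.sqrt(row_norms)
        d_c /= np.sqrt(col_norms)
        # Recompute the scaled matrix B = D_r A D_c.
        B = (A * d_c) * d_r[:, None]
    return d_r, d_c, B

# Hypothetical badly scaled test matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 80)) * rng.lognormal(sigma=3, size=(50, 80))
_, _, B = ruiz_equilibrate(A)
print("condition number before:", np.linalg.cond(A), "after:", np.linalg.cond(B))
```

In a first-order method the scalings must also be applied to the cost vector, right-hand side, and cone constraints, and undone when recovering the solution; the paper's contribution is to adapt such conditioning during the solve rather than fixing it once up front.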
