
Faster Parallel Solver for Positive Linear Programs via Dynamically-Bucketed Selective Coordinate Descent (1511.06468v1)

Published 20 Nov 2015 in cs.DS and cs.NA

Abstract: We provide improved parallel approximation algorithms for the important class of packing and covering linear programs. In particular, we present new parallel $\epsilon$-approximate packing and covering solvers which run in $\tilde{O}(1/\epsilon^2)$ expected time, i.e., in expectation they take $\tilde{O}(1/\epsilon^2)$ iterations and they do $\tilde{O}(N/\epsilon^2)$ total work, where $N$ is the size of the constraint matrix and $\epsilon$ is the error parameter, and where the $\tilde{O}$ hides logarithmic factors. To achieve our improvement, we introduce an algorithmic technique of broader interest: dynamically-bucketed selective coordinate descent (DB-SCD). At each step of the iterative optimization algorithm, the DB-SCD method dynamically buckets the coordinates of the gradient into those of roughly equal magnitude, and it updates all the coordinates in one of the buckets. This dynamically-bucketed updating permits us to take steps along several coordinates with similar-sized gradients, thereby permitting more appropriate step sizes at each step of the algorithm. In particular, this technique allows us to use in a straightforward manner the recent analysis from the breakthrough results of Allen-Zhu and Orecchia [2] to achieve our still-further improved bounds. More generally, this method addresses "interference" among coordinates, by which we mean the impact of the update of one coordinate on the gradients of other coordinates. Such interference is a core issue in parallelizing optimization routines that rely on smoothness properties. Since our DB-SCD method reduces interference via updating a selective subset of variables at each iteration, we expect it may also have more general applicability in optimization.
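To make the bucketing idea concrete, here is a minimal Python sketch of one DB-SCD-style step: gradient coordinates are grouped into power-of-two magnitude buckets, and all coordinates in one bucket are updated together. This is an illustrative toy under stated assumptions (the function db_scd_step, the uniform random bucket choice, the fixed step size, and the quadratic test objective are all assumptions), not the paper's packing/covering solver, which chooses buckets and step sizes according to the Allen-Zhu and Orecchia analysis.

```python
import numpy as np

def db_scd_step(x, grad, step_size=0.05, rng=None):
    """One illustrative DB-SCD-style step (a sketch, not the paper's solver).

    Buckets the gradient coordinates by magnitude so that coordinates in the
    same bucket have |g_i| within a factor of 2 of each other, then updates
    all coordinates in one randomly chosen bucket. Updating only
    similar-magnitude coordinates is what limits "interference" between
    coordinate updates.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = grad(x)
    mag = np.abs(g)
    active = mag > 0
    if not active.any():
        return x  # gradient is zero; nothing to update in this sketch

    # Bucket index k for coordinates with |g_i| in [2^k, 2^(k+1)).
    buckets = np.zeros(x.shape, dtype=int)
    buckets[active] = np.floor(np.log2(mag[active])).astype(int)

    # Pick one bucket uniformly at random (the paper selects buckets more
    # carefully as part of its analysis; uniform choice is an assumption here).
    chosen = rng.choice(np.unique(buckets[active]))
    mask = active & (buckets == chosen)

    # All selected coordinates have similar gradient magnitudes, so one step
    # size is appropriate for the whole bucket.
    x_new = x.copy()
    x_new[mask] -= step_size * g[mask]
    return x_new

# Toy usage: minimize f(x) = 0.5 * ||A x - b||^2 via its gradient A^T (A x - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A.T @ (A @ x - b)
x = np.zeros(2)
for _ in range(200):
    x = db_scd_step(x, grad)
```

The sketch uses dense NumPy arrays for clarity; the paper's bounds concern the sparse setting, where the total work per iteration is proportional to the nonzeros touched rather than the full dimension.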

Authors (4)
  1. Di Wang (408 papers)
  2. Michael Mahoney (18 papers)
  3. Nishanth Mohan (1 paper)
  4. Satish Rao (13 papers)
Citations (9)
