
A Scalable Shared-Memory Parallel Simplex for Large-Scale Linear Programming (1804.04737v2)

Published 12 Apr 2018 in cs.DC and cs.MS

Abstract: The Simplex tableau has been widely used and investigated in industry and academia. With the advent of the big data era, ever larger problems must be solved on ever larger machines whose architectures did not exist when this algorithm was conceived. In this paper, we present a shared-memory parallel implementation of the Simplex tableau algorithm for dense large-scale Linear Programming (LP) problems on modern multi-core architectures. We present the general scheme and explain the strategies used to parallelize each step of the standard Simplex algorithm, emphasizing the solutions found for performance bottlenecks. We analyze the speedup and parallel efficiency of the proposed implementation relative to the standard Simplex algorithm on a shared-memory system with 64 processing cores. The experiments were performed on several problems with up to 8192 variables and constraints, in both their primal and dual formulations. The results show that performance is generally much better when the formulation has more variables than inequality constraints. They also show that the parallelization strategies applied to avoid bottlenecks allow the implementation to scale well with problem size and core count, up to a certain problem-size limit; further analysis showed that this limit was an effect of resource limitation. Even so, our implementation reached speedups on the order of 19x.
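To illustrate the kind of per-step parallelization the abstract describes, the sketch below shows one dense Simplex tableau pivot parallelized with OpenMP on shared memory. This is a hedged reconstruction, not the authors' implementation: the function name `simplex_pivot`, the row-major tableau layout (objective row last, right-hand side in the last column), and the simple Dantzig pricing and ratio-test rules are assumptions for illustration; the paper's actual selection rules and bottleneck-avoidance strategies may differ.

```c
/* Minimal sketch of one shared-memory parallel pivot of a dense Simplex
 * tableau, under the assumptions stated above. NOT the authors' code.
 * T is a row-major (m+1) x (n+1) tableau: m constraint rows, objective
 * row last, right-hand side in the last column. Compile with -fopenmp. */
#include <math.h>
#include <omp.h>

/* Returns 0 after a successful pivot, 1 if optimal, -1 if unbounded. */
int simplex_pivot(double *T, int m, int n)
{
    const int cols = n + 1, obj = m;           /* objective row index */

    /* 1. Entering column: most negative reduced cost (parallel reduction). */
    int enter = -1;
    double best = -1e-9;
    #pragma omp parallel
    {
        int loc_j = -1; double loc_v = -1e-9;
        #pragma omp for nowait
        for (int j = 0; j < n; ++j)
            if (T[obj * cols + j] < loc_v) { loc_v = T[obj * cols + j]; loc_j = j; }
        #pragma omp critical
        if (loc_v < best) { best = loc_v; enter = loc_j; }
    }
    if (enter < 0) return 1;                   /* no improving column: optimal */

    /* 2. Ratio test: leaving row minimizing rhs / pivot-column entry. */
    int leave = -1;
    double ratio = INFINITY;
    #pragma omp parallel
    {
        int loc_i = -1; double loc_r = INFINITY;
        #pragma omp for nowait
        for (int i = 0; i < m; ++i) {
            double a = T[i * cols + enter];
            if (a > 1e-12) {
                double r = T[i * cols + n] / a;
                if (r < loc_r) { loc_r = r; loc_i = i; }
            }
        }
        #pragma omp critical
        if (loc_r < ratio) { ratio = loc_r; leave = loc_i; }
    }
    if (leave < 0) return -1;                  /* unbounded */

    /* 3. Normalize the pivot row, then eliminate the pivot column from every
     * other row; this O(m*n) update dominates the cost and parallelizes
     * naturally over rows. */
    double piv = T[leave * cols + enter];
    for (int j = 0; j <= n; ++j) T[leave * cols + j] /= piv;

    #pragma omp parallel for schedule(static)
    for (int i = 0; i <= m; ++i) {
        if (i == leave) continue;
        double f = T[i * cols + enter];
        if (f == 0.0) continue;
        for (int j = 0; j <= n; ++j)
            T[i * cols + j] -= f * T[leave * cols + j];
    }
    return 0;
}
```

The row-wise elimination in step 3 is where most of the speedup on a 64-core system would come from; the two reductions in steps 1 and 2 are cheaper but, if left serial, become the kind of bottleneck the paper's strategies aim to avoid.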

Citations (1)
