
Parallelizing the dual revised simplex method (1503.01889v1)

Published 6 Mar 2015 in math.OC and cs.NA

Abstract: This paper introduces the design and implementation of two parallel dual simplex solvers for general large scale sparse linear programming problems. One approach, called PAMI, extends a relatively unknown pivoting strategy called suboptimization and exploits parallelism across multiple iterations. The other, called SIP, exploits purely single iteration parallelism by overlapping computational components when possible. Computational results show that the performance of PAMI is superior to that of the leading open-source simplex solver, and that SIP complements PAMI in achieving speedup when PAMI results in slowdown. One of the authors has implemented the techniques underlying PAMI within the FICO Xpress simplex solver and this paper presents computational results demonstrating their value. This performance increase is sufficiently valuable for the achievement to be used as the basis of promotional material by FICO. In developing the first parallel revised simplex solver of general utility and commercial importance, this work represents a significant achievement in computational optimization.

Citations (191)

Summary

Overview of Parallelizing the Dual Revised Simplex Method

The research paper, "Parallelizing the Dual Revised Simplex Method" by Q. Huangfu and J. A. J. Hall, presents notable advances in computational optimization through the design and implementation of parallel dual simplex solvers for large-scale sparse linear programming (LP) problems. The dual revised simplex method, a variant of the simplex algorithm, has traditionally been favored for its efficiency in exploiting (hyper-)sparsity, especially when solving families of related LP problems. However, its extension to parallel computing has been limited by perceived inefficiencies. This paper addresses these challenges through two complementary parallel approaches, PAMI and SIP, each with a distinct strategy and scope.

Dual Revised Simplex Method and Its Parallelization Challenges

The dual revised simplex method, renowned for its efficiency on sparse LP problems, incorporates advanced algorithmic techniques such as dual steepest-edge (DSE) pricing and the bound-flipping ratio test (BFRT), which significantly enhance performance. These improvements, introduced chiefly in the 1990s, have kept the dual simplex method competitive despite the alternative of interior-point methods. Parallel schemes that succeed for the standard simplex method generally fail to carry over to the dual revised simplex method, largely because of unpredictable scalability and numerical stability concerns when solving sparse linear systems.
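Each iteration of the dual revised simplex method chains a small number of linear-algebra components. The sketch below illustrates that chain on a hypothetical dense 2x2 example; real solvers work with sparse LU factors of the basis matrix B rather than explicit solves, and the leaving-row and ratio-test choices here are crude stand-ins, not the paper's rules.

```python
# Hypothetical 2x2 illustration of the per-iteration linear algebra in the
# dual revised simplex method. All matrix values are made up for the demo.

def solve2(a, b, c, d, r0, r1):
    """Solve the 2x2 system [[a, b], [c, d]] x = [r0, r1] by Cramer's rule."""
    det = a * d - b * c
    return ((r0 * d - b * r1) / det, (a * r1 - r0 * c) / det)

B = [[2.0, 0.0], [1.0, 1.0]]     # basis matrix (hypothetical)
N = [[1.0, 3.0], [2.0, 1.0]]     # nonbasic columns (hypothetical)
p = 0                            # row of the chosen leaving variable (CHUZR)

# BTRAN: solve B^T y = e_p, giving row p of the basis inverse
y = solve2(B[0][0], B[1][0], B[0][1], B[1][1],
           1.0 if p == 0 else 0.0, 1.0 if p == 1 else 0.0)

# PRICE: pivotal row alpha = y^T N, the input to the dual ratio test (BFRT)
alpha = [y[0] * N[0][j] + y[1] * N[1][j] for j in range(2)]

# Stand-in for the dual ratio test's choice of entering column
q = max(range(2), key=lambda j: abs(alpha[j]))

# FTRAN: solve B w = N[:, q] to express the entering column in basis terms
w = solve2(B[0][0], B[0][1], B[1][0], B[1][1], N[0][q], N[1][q])
```

BTRAN, PRICE, the ratio test, and FTRAN are exactly the components whose costs and dependencies determine how much parallelism each iteration offers.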

Contribution: PAMI and SIP Approaches

  1. PAMI (Parallelism Across Multiple Iterations):
    • PAMI leverages a pivoting strategy called suboptimization, whose major-minor iteration framework creates scope for parallelism: multiple candidate leaving variables are considered concurrently across iterations.
    • Innovations within PAMI include task parallelization of the forward and backward transformations (FTRAN and BTRAN) and of the updates to dual variables and DSE weights, which improve load balancing across processors.
    • PAMI demonstrates superior speedup, outperforming the leading open-source simplex solver; tasks such as sparse matrix-vector products benefit substantially from parallel execution.
  2. SIP (Single Iteration Parallelism):
    • SIP parallelizes the computational components within a single iteration, overlapping independent operations wherever feasible to exploit the task parallelism available within one iteration.
    • Although SIP offers more modest gains than PAMI, its simpler structure proves advantageous on LP problems where PAMI's added complexity would induce a slowdown.
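The SIP idea can be sketched in a few lines: mutually independent pieces of one iteration are issued as concurrent tasks and joined before the iteration completes. The three "kernels" below are hypothetical stand-ins, not the solver's actual FTRAN/BTRAN and weight-update routines.

```python
# Minimal sketch of single-iteration parallelism (SIP): independent
# components of one simplex iteration run as overlapping tasks.
from concurrent.futures import ThreadPoolExecutor

def ftran(col):          # stand-in for a forward transform on a column
    return [2.0 * x for x in col]

def btran(row):          # stand-in for a backward transform on a row
    return [x / 2.0 for x in row]

def update_weights(ws):  # stand-in for a DSE weight update
    return [x + 1.0 for x in ws]

with ThreadPoolExecutor(max_workers=3) as pool:
    # These three components are mutually independent within the
    # iteration, so they can be issued concurrently and joined at the end.
    f1 = pool.submit(ftran, [1.0, 2.0])
    f2 = pool.submit(btran, [4.0, 6.0])
    f3 = pool.submit(update_weights, [0.0, 1.0])
    col, row, weights = f1.result(), f2.result(), f3.result()
```

The achievable speedup is bounded by the longest component in the overlap set, which is why SIP's gains are modest but reliable.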

Implications and Prospects

The practical implications of these developments extend beyond academia: they give Xpress, a commercial solver, a performance boost that brings it in line with leading solvers such as CPLEX. The research underlines the substantial impact of parallelism on computational speed while preserving, or even improving, iteration quality.

From a theoretical standpoint, this work reshapes the perception that revised simplex methods resist parallelization by demonstrating effective strategies that circumvent traditional pitfalls. Future developments in AI and computational optimization could leverage such approaches, applying them in diverse parallel computing environments or adapting them to emerging optimization problems within machine learning frameworks.

Conclusion

The work of Huangfu and Hall bridges a crucial gap in parallelizing a method that remains essential for LP problems. Their exploration of PAMI and SIP offers insights into effectively scaling the dual revised simplex method across multi-core architectures. As computational needs continue to grow in complexity and scale, such parallel optimization advances will influence methodologies in both academic research and industrial applications, carving a path for more efficient computational practice.