Beyond Smoothed Analysis: Analyzing the Simplex Method by the Book
Abstract: Narrowing the gap between theory and practice is a longstanding goal of the algorithm analysis community. To further our understanding of how algorithms work in practice, we propose a new algorithm analysis framework that we call by the book analysis. In contrast to earlier frameworks, by the book analysis models not only an algorithm's input data but also the algorithm itself. Results from by the book analysis are meant to correspond well with established knowledge of an algorithm's practical behavior, as they are grounded in observations from implementations, input modeling best practices, and measurements on practical benchmark instances. We apply our framework to the simplex method, an algorithm beloved for its excellent performance in practice and notorious for its high running time under worst-case analysis. The simplex method was likewise the showcase for the state-of-the-art framework of smoothed analysis (Spielman and Teng, STOC'01). We explain how our framework overcomes several weaknesses of smoothed analysis, and we prove that under assumptions on input scaling, feasibility tolerances, and other design principles used by simplex method implementations, the simplex method indeed attains a polynomial running time.
Explain it Like I'm 14
Overview: What this paper is about
This paper introduces a new way to study how algorithms really behave in the real world. The authors call it “by the book analysis.” Instead of only looking at the math behind an algorithm’s input, they also model how the algorithm is actually implemented in software—using the same tricks, settings, and safety checks that professional solvers use.
They apply this idea to the simplex method, a popular algorithm for solving linear programs (LPs). LPs are problems where you want to make the “best” choice (like maximizing profit) while obeying rules (constraints), and the simplex method usually solves them very fast in practice. However, worst-case theory says simplex can be extremely slow. The authors explain why simplex is fast in practice by modeling what LP solvers really do, and they prove that, under these realistic conditions, simplex runs in polynomial time (meaning its running time grows reasonably with problem size).
Key Questions the Paper Answers
- Why is the simplex method almost always fast in practice, even though worst-case theory says it can be slow?
- Can we build a theory that matches what LP solvers actually do—like using tolerances, scaling, and tiny random tweaks—to explain and guarantee good performance?
- What parts of real-world LP problems and solver implementations are most important to include in a realistic analysis?
How They Studied the Problem (Methods and Approach)
The authors follow three steps—just like reading a cookbook, choosing ingredients, and then baking:
- Observations (reading “by the book”):
- They studied solver user manuals, scientific papers, open-source code (like HiGHS and Glop), and even talked to developers.
- They looked at real problem datasets (like MIPLIB 2017) to see what typical LPs look like in practice.
- Assumptions (choosing the model based on reality):
- They turned observations into mathematical assumptions that reflect what solvers do. Examples (made concrete in the code sketch after this Methods list):
- Solvers use feasibility tolerances—tiny amounts of allowed wiggle room—around 10⁻⁶ to 10⁻⁷.
- Inputs are scaled so numbers aren’t too huge or tiny (to avoid numerical problems).
- The constraint matrix is well-conditioned (not too sensitive), typically with a condition number ≤ 10¹⁰.
- Solvers often add small random perturbations (tiny changes) to bounds or costs to avoid stalling and make progress.
- Proof (analyzing a realistic simplex):
- They analyze a two-phase simplex method that uses the shadow vertex rule (a pivot rule commonly used in probabilistic analyses).
- Phase I: carefully adds constraints one-by-one and keeps the solution feasible.
- Phase II: moves along a path of “corner points” (vertices) toward optimality using an auxiliary objective and the actual objective.
- They do not add random noise to the constraint matrix A, so the analysis works with sparse matrices—matching practice.
- They use geometric ideas like the mean width of the feasible region (think: how “wide” the set of allowed solutions is) to quantify progress.
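To make the assumptions above concrete, here is a minimal Python sketch (illustrative, not from the paper): it scales the rows of the extended matrix [A | b] to unit norm, perturbs the right-hand side by noise on the order of the feasibility tolerance, and checks feasibility up to that tolerance. The constant FEAS_TOL, the helper names, and the example data are our own choices.

```python
import numpy as np

FEAS_TOL = 1e-7  # typical primal feasibility tolerance (the paper cites 1e-6 to 1e-7)

def scale_rows(A, b):
    """Equilibrate: divide each row of the extended matrix [A | b] by its norm."""
    norms = np.linalg.norm(np.hstack([A, b[:, None]]), axis=1)
    return A / norms[:, None], b / norms

def perturb_rhs(b, rng, magnitude=FEAS_TOL):
    """Add tiny positive noise to the right-hand side, as solvers do to fight
    degeneracy. The paper's analysis uses exponential noise; many codes use
    uniform noise -- either way the noise is tiny relative to the data."""
    return b + magnitude * rng.exponential(size=b.shape)

def is_feasible(A, b, x, tol=FEAS_TOL):
    """Tolerance-based feasibility: Ax <= b may be violated by up to tol."""
    return bool(np.all(A @ x <= b + tol))

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 3.0]])  # toy constraint matrix
b = np.array([4.0, 6.0])
A, b = scale_rows(A, b)
x = np.array([0.5, 0.5])                 # candidate point
print(is_feasible(A, perturb_rhs(b, rng), x))
```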
Simple explanations of key terms
- Linear program (LP): Choose values to maximize or minimize something (like profit), following rules (constraints).
- Simplex method: Walks along the edges of the shape formed by constraints (a polyhedron), moving from corner to corner until reaching the best corner.
- Pivot step: One move from one corner to the next.
- Feasibility tolerance: Tiny allowed violation of a rule (example: it’s okay if a constraint is off by 0.000001).
- Condition number: How sensitive solving a system is to small changes; lower is better.
- Perturbation: Tiny changes added on purpose to help the algorithm avoid getting stuck.
- Shadow vertex rule: A way to choose the path through corners using a “shadow” of the feasible shape with a helper objective (a toy sketch follows this list).
- Mean width: A number measuring how “wide” the feasible set is on average across directions.
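As a toy illustration of the shadow vertex (parametric) idea, the sketch below sweeps a combined objective (1 − lam)·c_aux + lam·c_true from a helper objective to the actual one and records the distinct optima it visits along the way. It simulates the shadow path by re-solving small LPs with SciPy; it is not the paper's pivoting procedure, and the random polytope, objectives, and box bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 2))                    # 8 random constraint rows
A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit row norms, per the paper's scaling
b = np.ones(8)

c_aux = np.array([1.0, 0.0])    # helper (auxiliary) objective
c_true = np.array([0.3, 1.0])   # actual objective

path = []
for lam in np.linspace(0.0, 1.0, 200):
    c = (1 - lam) * c_aux + lam * c_true       # interpolated objective
    res = linprog(-c, A_ub=A, b_ub=b,          # linprog minimizes, so negate
                  bounds=[(-10, 10)] * 2,      # box bounds keep the toy LP bounded
                  method="highs")
    v = np.round(res.x, 6)
    if len(path) == 0 or not np.array_equal(v, path[-1]):
        path.append(v)

print(f"shadow path visited {len(path)} distinct vertices")
```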
What They Found and Why It Matters
- By modeling real solver behavior (tolerances, scaling, and small perturbations to bounds and costs), the simplex method can be proven to run in polynomial time.
- Their analysis works with sparse matrices and does not require adding noise to every entry of the constraint matrix (unlike smoothed analysis), which makes it closer to real practice.
- They show that tiny bound/objective perturbations help produce large “slacks” and “reduced costs,” which means each pivot step makes meaningful progress. This explains the practical wisdom that “more aggressive perturbations” can help speed up simplex.
- With realistic parameter values:
- Feasibility tolerances around 10⁻⁶ to 10⁻⁷,
- Condition numbers up to about 10¹⁰,
- Mean width of feasible sets around 100 (based on measurements on MIPLIB 2017),
- the expected number of pivot steps is a polynomial function of the number of variables and constraints. In simple terms: it grows reasonably with problem size.
- This “by the book” analysis fills gaps left by smoothed analysis:
- Smoothed analysis adds random noise to all inputs (making them dense), which doesn’t match sparse, real LPs.
- Smoothed analysis suggests more noise helps performance, but in practice, low noise and high precision are preferred.
- Small perturbations to constraints can drastically change the feasible region in real LPs—so blindly adding noise is not a realistic model for measurement error (the small numeric demo below illustrates this).
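Here is a tiny constructed example (ours, not the paper's) of that sensitivity: with two nearly parallel constraints, changing a single matrix entry by 2·10⁻⁶ moves the vertex they define by a distance of about 2.8.

```python
import numpy as np

eps = 1e-6
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + eps]])   # two nearly parallel constraint rows
b = np.array([1.0, 1.0])
print(np.linalg.solve(A, b))       # vertex at [1, 0]

A_noisy = A.copy()
A_noisy[1, 0] += 2e-6              # perturb ONE matrix entry by 2e-6
print(np.linalg.solve(A_noisy, b)) # vertex jumps to roughly [-1, 2]
```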
Implications and Impact
- For practitioners: The paper supports common solver advice—use good scaling, respect tolerances, and allow small perturbations—to get reliable, fast performance.
- For researchers: It shows a path to building theories that match real-world implementations. Instead of pretending algorithms live in a perfect math world, model them “by the book”—with the same settings and design choices used in software.
- For future work: The framework can be refined and extended to other pivot rules or other algorithms. It encourages measuring quantities (like mean width) on real datasets to ground the theory in reality.
- Big picture: This narrows the gap between algorithm theory and practice, giving us explanations and guarantees that align with what engineers and users actually see when they solve LPs.
Knowledge Gaps
Knowledge gaps, limitations, and open questions
The paper leaves the following points unresolved; each item identifies a concrete gap or question that future work could address.
- Extend the analysis beyond the shadow-vertex pivot rule to pivot rules used in modern solvers (e.g., steepest-edge and its approximations, most-negative reduced cost, Devex), and quantify how the bounds change under these rules.
- Replace the sequential Phase I procedure with Phase I strategies actually used in practice (crash starts, primal/dual Phase I, advanced bases) and analyze their impact on the bound.
- Justify or relax the strong assumption that every square submatrix of A has bounded condition number; provide verifiable and computable proxies (e.g., basis condition numbers along the algorithm’s path) that suffice for the analysis.
- Incorporate floating-point arithmetic explicitly into the proofs (rather than assuming exact feasibility after perturbation), and quantify how rounding, refactorizations, and numerical error affect the pivot-count and correctness guarantees.
- Formalize solver-specific feasibility and optimality tolerance definitions; show robustness of the results under the different tolerance semantics used by Gurobi, MOSEK, HiGHS, and Glop.
- Provide high-probability or tail bounds on pivot steps, not only expectations; characterize variance and worst-case deviations under the proposed perturbation model.
- Align the perturbation distributions with implementation practice: the analysis uses exponentially distributed RHS perturbations and log-Lipschitz objectives, while codes typically use uniform random perturbations; show equivalence or quantify differences.
- Specify and justify how to set the perturbation magnitude as a function of the dimension d, the number of constraints n, and the target tolerances; ensure the prescribed magnitude remains appropriate for large-scale or highly sparse problems.
- Quantify the optimality gap (in objective value) introduced by cost and bound perturbations under feasibility/optimality tolerances; provide error bounds relative to the unperturbed LP.
- Exploit sparsity structurally in the analysis: derive bounds that depend on sparsity patterns (e.g., row/column sparsity, block structure, network topology) rather than only on d and n.
- Provide an actionable procedure to estimate or bound the mean width of practical LP feasible sets, including computational complexity, sampling rates, and error bars; validate across diverse benchmarks beyond LP relaxations in MIPLIB.
- Clarify the role and practical setting of the interpolation parameter in the combined objective that mixes the auxiliary and actual objectives: define how it is chosen, how it interacts with tolerances, and how it affects the final bound.
- Validate the theoretical pivot-count bounds against empirical measurements (pivot steps and wall-clock time) on standard datasets and solvers with realistic settings (presolve, scaling, pricing, anti-degeneracy).
- Connect pivot-count bounds to time-per-pivot costs (factorization, updates, pricing) to yield end-to-end time complexity predictions; include dependence on matrix density and basis update strategy.
- Model and analyze adaptive, iteration-dependent perturbations and tolerance adjustments (e.g., anti-degeneracy measures, expanding tolerances, minimum step sizes) used by mature codes.
- Study how presolve transformations (aggregations, eliminations, bound strengthening, coefficient tightening) affect the geometric parameters of the analysis (e.g., mean width and condition numbers) and the resulting bound.
- Replace the intractable global condition-number assumptions with tractable, solver-observable measures (e.g., basis condition numbers, growth factors, pivot stability metrics), and prove bounds in terms of these.
- Provide guarantees that random or structured bound perturbations prevent cycling under the modeled rules, including conditions under which cycling could still occur.
- Analyze Phase I complexity more precisely, including the dependence on the number and order of added constraints, and compare with modern Phase I strategies.
- Assess robustness to ill-scaled problems despite automatic scaling: show how scaling transformations affect the parameters of the analysis, and whether the bounds are scale-invariant or require specific normalization.
- Extend the framework to dual simplex (with cost perturbation) explicitly, matching practical behavior, and prove comparable bounds.
- Examine sensitivity to structured noise and correlations (e.g., network LPs, equality constraints, block-angular structures): does the Chernoff-based slack lower bound survive in non-i.i.d. settings?
- Provide guidance when perturbations are undesirable (e.g., robust optimization, regulated domains): can by-the-book analysis yield guarantees without changing bounds/costs?
- Compare quantitatively with smoothed analysis: demonstrate cases where by-the-book bounds are tighter or more predictive, and identify regimes where smoothed analysis still offers insights.
- Generalize to LPs arising inside MIP branch-and-bound (re-optimizations, warm starts, frequent basis changes), and explain whether the analysis parameters and pivot-count bounds remain stable across nodes.
- Offer principled criteria for choosing automatic scaling modes and perturbation magnitudes in real solvers that optimize both numerical reliability and theoretical guarantees, grounded in the proposed parameters.
Practical Applications
Immediate Applications
The following applications can be deployed now by leveraging the paper’s “by the book analysis” framework and its concrete findings on scaling, feasibility tolerances, perturbations, sparsity, and measurable parameters (e.g., condition numbers, mean width):
- Solver configuration and tuning guides
- Use case: Provide practitioners with standard operating procedures for setting primal/dual feasibility tolerances, condition-number limits, and perturbation magnitudes proportional to tolerances.
- Sector(s): Software (LP/MIP solvers), operations research, logistics, finance, energy.
- Tools/products/workflows: “Feasibility Tolerance Advisor” embedded in solver GUIs; configuration templates; automatic warnings when basis condition numbers exceed recommended thresholds (e.g., 10¹⁰); parameter presets aligned with 1e−6–1e−7 tolerances (a configuration sketch follows this item).
- Assumptions/dependencies: IEEE-754 double precision; solver supports configurable tolerances and perturbations; feasible set is well-scaled; sparse matrices; condition numbers are monitored.
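A minimal sketch of such a preset, assuming SciPy's HiGHS-backed linprog; the option names primal_feasibility_tolerance and dual_feasibility_tolerance are documented SciPy/HiGHS options, while the preset values and the toy LP are our own.

```python
import numpy as np
from scipy.optimize import linprog

# Preset aligned with the tolerance ranges reported in the paper
# (primal/dual feasibility tolerances around 1e-6 to 1e-7).
HIGHS_PRESET = {
    "primal_feasibility_tolerance": 1e-7,
    "dual_feasibility_tolerance": 1e-7,
    "presolve": True,
}

# Toy LP: maximize x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
c = np.array([-1.0, -2.0])               # linprog minimizes, so negate
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

res = linprog(c, A_ub=A, b_ub=b, method="highs", options=HIGHS_PRESET)
print(res.x, -res.fun)                   # expect x = [3, 1], objective 5
```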
- Pre-solve numerical health audit
- Use case: Automated checks that rows are normalized, coefficients/bounds/objectives are within recommended ranges, and degeneracy risk is mitigated via bound/cost perturbations sized to feasibility tolerances.
- Sector(s): Software, supply chain, manufacturing, telecom, transportation.
- Tools/products/workflows: “Numerical Health Dashboard” that verifies row norms, sparsity, coefficient ranges, and flags large basis condition numbers; suggests unit scaling and bound shifts (a sketch follows this item).
- Assumptions/dependencies: Access to model before solve; ability to compute or estimate condition numbers; adherence to solver tolerance semantics.
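A minimal audit sketch in Python (illustrative; the thresholds and the full-matrix condition number are stand-ins, since checking every square submatrix, as the paper's assumption requires, is intractable):

```python
import numpy as np

def health_audit(A, b, kappa_max=1e10):
    """Illustrative pre-solve checks mirroring the paper's assumptions."""
    report = {}
    ext = np.hstack([A, b[:, None]])           # extended matrix [A | b]
    report["max_row_norm"] = float(np.linalg.norm(ext, axis=1).max())
    nz = np.abs(A[A != 0])
    report["coef_range"] = (float(nz.min()), float(nz.max()))
    report["sparsity"] = 1.0 - np.count_nonzero(A) / A.size
    # Cheap proxy for the every-submatrix condition assumption:
    # the condition number of the full matrix, when it is square.
    if A.shape[0] == A.shape[1]:
        report["cond_ok"] = bool(np.linalg.cond(A) <= kappa_max)
    return report

A = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
print(health_audit(A, b))
```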
- Mean width estimation for runtime forecasting
- Use case: Estimate half the mean width M of the feasible region via randomized objective probes to forecast pivot-count and runtime (using the bound’s dependence on M, d, n, feasTol).
- Sector(s): Software, cloud optimization services, capacity planning for analytics teams.
- Tools/products/workflows: “Mean Width Estimator” plugin that samples random objectives and aggregates normalized optimal values (a sketch follows this item); integrates with schedulers to set time limits, choose simplex vs interior point, or prioritize jobs.
- Assumptions/dependencies: Feasible set scaling (row norms ≈ 1), perturbation size calibrated to tolerance, MIPLIB-like instances with M ≈ 10²; sampling budget and time limits.
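A Monte Carlo sketch of such an estimator, assuming SciPy's linprog: sample random unit directions, solve a max and a min LP per direction, and average the resulting widths. The sample count and the unit-box example are our own choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def estimate_mean_width(A, b, n_samples=200, seed=0):
    """Monte Carlo estimate of the mean width of {x : Ax <= b}.
    Width in direction u is (max u.x) - (min u.x) over the feasible set;
    we average over random unit directions."""
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    widths = []
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                 # uniform direction on the sphere
        hi = linprog(-u, A_ub=A, b_ub=b, bounds=[(None, None)] * d,
                     method="highs")
        lo = linprog(u, A_ub=A, b_ub=b, bounds=[(None, None)] * d,
                     method="highs")
        if hi.status == 0 and lo.status == 0:  # both LPs solved
            widths.append(-hi.fun - lo.fun)
    return float(np.mean(widths))

# Unit box [0,1]^2 written as Ax <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
print(estimate_mean_width(A, b))
```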
- Perturbation policy to prevent cycling and speed convergence
- Use case: Adopt bound/cost perturbations sized to feasibility tolerances (e.g., 1× to 2× feasTol) to avoid degeneracy and improve progress, especially in dual simplex.
- Sector(s): Software (HiGHS, Glop, Gurobi, MOSEK users), industrial optimization users.
- Tools/products/workflows: “Perturbation Magnitude Calibrator” that sets perturbations based on optTol/feasTol (a sketch follows this item); regression tests that confirm non-cycling under perturbations; API flags for controlled perturbation behavior.
- Assumptions/dependencies: Solver supports pre- and in-iteration perturbations; tolerances are within typical ranges; modeler accepts minimal deviations within tolerance semantics.
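A minimal calibrator sketch (ours, not a solver API): shift variable bounds outward by uniform noise of size between 1× and 2× the feasibility tolerance, matching the uniform perturbations typical of implementations.

```python
import numpy as np

def calibrate_perturbation(bounds_lo, bounds_up, feas_tol=1e-7, factor=1.5,
                           seed=0):
    """Shift variable bounds outward by random amounts of size up to
    factor * feas_tol (factor in [1, 2], per the use case above).
    Codes typically draw uniform noise; this sketch does the same."""
    rng = np.random.default_rng(seed)
    noise_lo = factor * feas_tol * rng.uniform(size=bounds_lo.shape)
    noise_up = factor * feas_tol * rng.uniform(size=bounds_up.shape)
    return bounds_lo - noise_lo, bounds_up + noise_up

lo = np.zeros(3)
up = np.ones(3)
print(calibrate_perturbation(lo, up))
```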
- Model formulation best practices aligned with observed solver limits
- Use case: Codify recommendations for coefficient ranges (e.g., 10⁻⁷ to 10⁹), RHS/objective/bounds ≤ 10⁴–10⁸, and pruning “near-zero” coefficients to preserve sparsity and numerical stability.
- Sector(s): Education, industry modeling teams (finance, energy, logistics).
- Tools/products/workflows: “Scaling Audit” that checks unit choices, normalizes rows, prunes tiny entries (e.g., below 10⁻¹³–10⁻⁹ depending on the solver); a sketch follows this item.
- Assumptions/dependencies: Adoption of well-established solver manuals and code behavior; domain experts can adjust units; acceptance of sparse representations.
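A minimal sketch of the row-normalization and pruning steps (the drop threshold is illustrative; in practice the right-hand side would be scaled along with each row):

```python
import numpy as np

def scaling_audit(A, drop_below=1e-9):
    """Normalize each row of A to unit Euclidean norm, then zero out
    entries that fall below the drop tolerance (solvers use thresholds
    between roughly 1e-13 and 1e-9)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    A_scaled = A / np.where(norms > 0, norms, 1.0)  # avoid dividing by zero
    A_scaled[np.abs(A_scaled) < drop_below] = 0.0   # prune near-zero entries
    return A_scaled

A = np.array([[1e4, 2e-6, 0.0],
              [3.0, 4.0, 1e-12]])
print(scaling_audit(A))
```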
- Benchmarking and reporting standards for solver evaluation
- Use case: Build benchmark protocols that report sparsity, condition numbers, tolerance settings, perturbation strategies, and estimated mean width—replacing opaque smoothed-analysis-style perturbations.
- Sector(s): Academia, solver vendors, open-source communities.
- Tools/products/workflows: Benchmark metadata schema; “By-the-Book” benchmark suite emphasizing sparse, real-world LPs; reproducibility reports including parameter settings and numerical health metrics.
- Assumptions/dependencies: Community uptake; datasets like MIPLIB; open reporting of solver options and pre-processing steps.
- Runtime and resource planning for production optimization pipelines
- Use case: Use the paper’s polynomial pivot-count bounds to set queue priorities and SLAs for optimization tasks in production (e.g., nightly planning runs).
- Sector(s): Cloud platforms, enterprise analytics.
- Tools/products/workflows: Pre-solve analysis service that estimates pivot steps from d, n, M, feasTol, and condition numbers; resource allocation rules.
- Assumptions/dependencies: Availability of quick estimates of parameters; stability of tolerance and scaling settings across runs.
- Educational integration
- Use case: Improve curricula to focus on “by the book” analysis and realistic solver behavior (tolerances, scaling, sparsity, perturbations) rather than solely worst-case or smoothed analysis.
- Sector(s): Education, training for data scientists and OR practitioners.
- Tools/products/workflows: Course modules, hands-on labs that inspect HiGHS/Glop/Gurobi/MOSEK settings and measure mean width; case studies showing polynomial-time behavior under practical assumptions.
- Assumptions/dependencies: Access to solvers; willingness to reframe learning outcomes around practice-informed theory.
- Model governance and reproducibility in regulated environments
- Use case: Require documentation of tolerance settings, scaling decisions, and perturbation policies to ensure replicability and auditability of optimization results.
- Sector(s): Finance, energy, public sector procurement.
- Tools/products/workflows: Governance templates for optimization pipelines; reproducibility checklists including numeric parameters and solver option files.
- Assumptions/dependencies: Organizational policy support; alignment with solver capabilities; emphasis on tolerance-based feasibility definitions.
Long-Term Applications
These applications require further research, scaling, or development—often to replace simplifying assumptions (e.g., shadow vertex rule) with widely used rules, extend to other algorithms, or formalize standards and certifications:
- Pivot rules and Phase I designs that match practice but retain theoretical guarantees
- Use case: Replace shadow vertex with widely used rules (steepest edge, Devex, etc.) and develop proofs under by-the-book assumptions; adapt Phase I beyond the sequential algorithm analyzed in the paper.
- Sector(s): Solver vendors, academia.
- Dependencies: New theory to handle common pivot rules; empirical validation on sparse, real-world LPs.
- Adaptive perturbation schemes with theory-backed performance
- Use case: Develop dynamic perturbation strategies that tune magnitudes based on observed degeneracy, condition numbers, and measured progress.
- Sector(s): Software.
- Dependencies: Real-time metrics, progress potentials; proofs linking perturbations to pivot progress under varying conditions.
- Automatic scaling algorithms that optimize mean width and numerical stability
- Use case: Build auto-scalers that normalize rows, control coefficient ranges, and actively reduce mean width to improve runtime predictability.
- Sector(s): Software.
- Dependencies: Fast estimation of mean width; strategies that preserve model semantics and interpretability; proofs that scaling reduces M without harming solution quality.
- Standardization of numerical reporting and solver metadata
- Use case: Community standards to report tolerances, condition numbers, perturbation policies, and mean width proxies alongside results.
- Sector(s): Academia, vendors, benchmarking organizations.
- Dependencies: Consensus-building; schema design; tooling integrated into solver pipelines.
- Certification of polynomial-time behavior given measured parameters
- Use case: Provide “certificates” that a run meets by-the-book assumptions (scaling, κ ≤ 10¹⁰, tolerances, sparsity) and thus admits a polynomial bound on pivot steps.
- Sector(s): Regulated industries, public procurement.
- Dependencies: Robust parameter measurement; independent verification; accepted certification processes.
- Pre-solve runtime prediction services integrated with MLOps/OptOps
- Use case: Enterprise services that analyze LPs pre-solve, estimate runtime, recommend solver choices and parameter settings, and manage scheduling.
- Sector(s): Cloud platforms, large enterprises.
- Dependencies: Accurate parameter estimation at scale; integration with job schedulers; longitudinal performance data.
- Extending by-the-book analysis beyond simplex
- Use case: Apply the framework to interior-point methods, network flow, SAT/CP solvers, or nonlinear optimization to better align theory with practice.
- Sector(s): Software, academia.
- Dependencies: Identification of practice-critical parameters (e.g., preconditioners, damping, line search tolerances); new proofs.
- Robust modeling guidance reconciling perturbations with domain accuracy
- Use case: Develop principled ways to use tolerance-aligned perturbations without compromising model fidelity, especially in sensitive domains (e.g., energy markets).
- Sector(s): Energy, finance, public policy.
- Dependencies: Domain-specific validation; governance rules distinguishing permissible solver-level perturbations from data-level noise.
- Hardware-aware numerical policies
- Use case: Tailor tolerances and scaling to mixed precision, GPUs, or specialized accelerators; quantify impacts on by-the-book guarantees.
- Sector(s): HPC, cloud.
- Dependencies: Hardware-specific error profiles; adapted condition number thresholds; experimental validation.
- Automated unit selection and data normalization assistants
- Use case: Assist modelers in choosing measurement units to bring coefficients into recommended ranges and reduce condition numbers.
- Sector(s): Industry modeling teams, education.
- Dependencies: Domain ontologies; user acceptance; preservation of interpretability.
- Degeneracy-aware progress metrics and potential functions in production solvers
- Use case: Implement measurable potential functions (objective, auxiliary objective, path-progress) that correlate with guaranteed pivot progress under by-the-book assumptions.
- Sector(s): Software.
- Dependencies: Instrumentation in solvers; mapping between theoretical potentials and code-level metrics.
- Community datasets reflecting by-the-book parameters
- Use case: Curate datasets with metadata on sparsity, condition numbers, tolerances, perturbation policies, and mean width estimates—enabling reproducible, practice-aligned research.
- Sector(s): Academia, open-source communities.
- Dependencies: Data collection and curation; tooling for parameter measurement; broad adoption.
Notes on feasibility and key dependencies across applications:
- Assumptions: Well-scaled LPs (row norms near 1), feasibility tolerances of order 10⁻⁶–10⁻⁷, limited basis condition numbers (≤ 10¹⁰), acceptance of tolerance-based feasibility/complementary slackness, sparse matrices, and calibrated perturbations.
- Dependencies: Ability to estimate mean width (via randomized objective sampling), compute or bound condition numbers, log solver metrics (pivot steps, degeneracy events), and integrate pre-solve audits into workflows.
- Caveats: The analyzed method uses the shadow vertex rule (less common in production); some guarantees hinge on independence assumptions and perturbed RHS/objectives; measuring mean width at scale can be time- and compute-intensive, requiring practical proxies or sampling strategies.
Glossary
- auxiliary objective: An additional objective function used to guide the shadow-vertex path during optimization. "with auxiliary objective "
- average-case analysis: A framework that studies expected algorithm performance over a specified input distribution. "average-case analysis is the theoretician's first attempt to provide a better understanding of the running time."
- basic feasible solution: A corner (vertex) of the feasible region corresponding to a basis, from which the simplex method starts or iterates. "A simplex method first determines a basic feasible solution"
- big-M: A large constant used in modeling (e.g., constraints) to enforce logical conditions; problematic when too large numerically. "The default big-M value is "
- bound perturbations: Intentional small changes to variable bounds (or RHS/objective) to improve numerical behavior, avoid degeneracy, or speed up the simplex method. "One consequence of bound perturbations is that we avoid degeneracy and consequently the simplex method cannot cycle."
- by the book analysis: A proposed framework that models both input data and algorithm implementation details to align theory with practice. "we propose a new algorithm analysis framework that we call by the book analysis."
- Chernoff bound: A probabilistic tail bound used to control deviations of sums of random variables. "The lower bound on the slacks is derived using a Chernoff bound."
- complementary slackness: Optimality conditions linking primal and dual solutions, relating slackness in constraints to dual variable values. "which satisfies complementary slackness up to dual feasibility tolerance "
- condition number: A measure of sensitivity of a matrix or linear system to perturbations; large values indicate potential numerical instability. "the condition number of linear systems is assumed to be no greater than 10¹⁰."
- dual simplex method: A variant of the simplex algorithm that maintains dual feasibility and iterates to primal feasibility/optimality. "before starting the dual simplex method."
- equilibration: A scaling technique aiming to balance row/column magnitudes to improve numerical properties of the LP. "the simplex method incorporates two scaling methods, one using equilibration and one based on maximum value"
- extended matrix: The matrix formed by augmenting constraint coefficients with the right-hand side (e.g., [A | b]). "the rows of the extended matrix each have Euclidean norm at most 1."
- Farkas certificate: A certificate (from Farkas’ lemma) indicating infeasibility of a linear system via a particular dual solution. "most important constraints are those whose relative values in the Farkas certificate are at least ."
- feasibility tolerance: A user-controlled threshold allowing small violations of constraints to be treated as feasible. "There are two of these, a primal feasibility tolerance and a dual feasibility tolerance, also called the optimality tolerance."
- Haar measure: The canonical uniform measure on a group (or sphere) used to define uniform randomness in geometric analysis. "with respect to the Haar measure on the unit sphere."
- IEEE 754 double precision floating point: The standard 64-bit floating-point format with 1 sign bit, 11 exponent bits, and 52 explicitly stored significand bits (53 bits of precision counting the implicit leading bit). "In an IEEE 754 double precision floating point number, there is 1 bit for the sign, 11 bits for the exponent, and 53 bits for the significand"
- ill-conditioned: Describes a matrix/system with a large condition number that leads to unstable numerical behavior. "potentially an ill-conditioned one"
- LP relaxation: The linear programming relaxation of an integer/mixed-integer problem, obtained by dropping integrality constraints. "over the LP relaxation using Gurobi 12.0.1."
- mean width: A geometric quantity measuring the average distance across a convex set in random directions. "the mean width (a type of mixed volume of a convex body) of the feasible set naturally appeared in our proofs."
- mixed volume: A convex-geometric measure generalizing volume to multiple bodies, related here to mean width. "mean width (a type of mixed volume of a convex body)"
- parametric rule: A pivot rule equivalent to the shadow vertex rule, guiding pivots along a path defined by two objectives. "the shadow vertex rule, also known as the parametric rule"
- Phase I procedure: The initial stage of the simplex method used to find a basic feasible solution. "A simplex method first determines a basic feasible solution using a Phase I procedure"
- Phase II: The optimization stage of the simplex method that improves the objective starting from a feasible basis. "Phase II starts when a basic feasible solution is found"
- pivot rule: A strategy dictating which entering and leaving variables are chosen at each simplex iteration. "The choice for what basic feasible solution to move to is governed by a pivot rule."
- pivot step: A single iteration of the simplex method that moves to an adjacent basic feasible solution. "Such a move is called a pivot step, and the number of pivot steps is a proxy for the running time."
- policy iteration algorithm: An algorithm for solving Markov Decision Processes, related in analysis to simplex-style bounds. "the policy iteration algorithm for Markov Decision Processes with bounded discount rate"
- primal simplex method: A variant of the simplex algorithm that maintains primal feasibility while improving the objective. "In the primal simplex method, when a bound is found to be infeasible, it is shifted"
- random perturbation: Random noise added to data (e.g., objective or bounds) to improve algorithmic behavior or avoid pathological cases. "random perturbation before starting the simplex method"
- reduced costs: The marginal change in objective per unit increase of a nonbasic variable; central to pivot decisions. "the most negative reduced cost rule"
- robust optimization: A modeling framework accounting for uncertainty, focusing on constraint satisfaction under perturbations. "observed on practical LPs in the robust optimization literature"
- semi-random shadow vertex path: A pivot path guided by partly random objectives used in analysis to bound path length. "the length of the semi-random shadow vertex path."
- shadow vertex method: A simplex variant that follows the edge path defined by two objectives (auxiliary and target). "using the shadow vertex method to obtain bases"
- smoothed analysis: A framework assessing algorithm performance under slight random perturbations of inputs. "In this section we review the smoothed analysis of the simplex method."
- subdeterminant: The determinant of a square submatrix; bounds on these relate to worst-case performance analyses. "bounded subdeterminants"
- weakly polynomial-time algorithm: An algorithm whose running time is polynomial in certain parameters (e.g., dimensions), but may depend on numeric magnitudes rather than input bit-length. "to find a weakly polynomial-time algorithm for linear programming."