Regular Simplicial Search Method (RSSM)
- RSSM is a derivative-free optimization method that uses regular simplex geometry to systematically explore smooth minimization problems.
- It employs reflection and shrinking steps to maintain simplex regularity, ensuring robust theoretical convergence and complexity guarantees.
- The method's geometric consistency aids in effective partitioning and bounding, making it applicable to high-dimensional search and similarity tasks.
The Regular Simplicial Search Method (RSSM) is a provable, simplex-type derivative-free optimization algorithm that operates via iterative geometric manipulations of regular simplexes in order to solve smooth minimization problems. RSSM incorporates reflection and shrinking steps, strictly maintains regularity of the simplex, and leverages robust theoretical convergence and complexity guarantees that distinguish it from prior heuristic-based methods.
1. The RSSM Algorithm: Structure and Iterative Procedure
RSSM begins by constructing a regular simplex in $\mathbb{R}^n$ from an initial center point $x_0$ and simplex radius $r_0 > 0$. The simplex comprises $n+1$ affinely independent vertices, each equidistant from the centroid.
At each iteration, the vertices $x^1, \dots, x^{n+1}$ are sorted by function value so that $f(x^1) \le \dots \le f(x^{n+1})$. The vertex with the highest function value, $x^{n+1}$, is termed the “worst” vertex. RSSM proceeds with a reflection step, replacing the worst vertex by a reflected point computed via
$$x_r = \frac{2}{n}\sum_{i=1}^{n} x^i - x^{n+1},$$
where $x^1, \dots, x^n$ are the non-worst vertices. This reflection is centered on the average of the better vertices and preserves the regular simplex geometry.
A sufficient descent condition of the form $f(x_r) \le f(x^{n+1}) - \gamma L r_k^2$ is enforced, where $\gamma > 0$ is a descent threshold, $L$ is the gradient Lipschitz constant, and $r_k$ is the current simplex radius. If the reflected point fails this criterion, a shrinking step is performed: for $i = 2, \dots, n+1$, each vertex is updated by
$$x^i \leftarrow x^1 + \theta\,(x^i - x^1)$$
with $\theta \in (0, 1)$, thereby reducing the simplex size ($r_{k+1} = \theta r_k$) while anchoring to the best vertex $x^1$.
Regularity is preserved during both steps, ensuring that the simplex’s geometric structure (equal edge lengths, positive volume) remains consistent, which is foundational for the method's complexity guarantees.
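The reflect-or-shrink loop above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' reference implementation: the constructor builds a regular simplex by projecting the standard basis of $\mathbb{R}^{n+1}$ onto the zero-sum hyperplane, and the sufficient-descent test uses the assumed form $f(x_r) \le f(\text{worst}) - \gamma L r^2$.

```python
import numpy as np

def regular_simplex(center, radius, n):
    """Vertices (rows) of a regular n-simplex in R^n centered at `center`,
    with every vertex at distance `radius` from the centroid."""
    V = np.eye(n + 1)
    V -= V.mean(axis=0)                    # rows now sum to zero
    Q, _ = np.linalg.qr(np.ones((n + 1, 1)), mode="complete")
    V = V @ Q[:, 1:]                       # coordinates in R^n, distances kept
    V *= radius / np.linalg.norm(V[0])     # set centroid-to-vertex distance
    return np.asarray(center, float) + V

def rssm(f, x0, r0, gamma=1e-4, L=1.0, theta=0.5, max_iter=2000, r_tol=1e-8):
    """Minimal RSSM-style loop: reflect the worst vertex through the
    centroid of the others; if the sufficient-descent test (assumed form
    f(x_r) <= f(worst) - gamma*L*r^2) fails, shrink toward the best vertex."""
    n = len(x0)
    V = regular_simplex(x0, r0, n)
    r = r0
    for _ in range(max_iter):
        fv = np.array([f(v) for v in V])
        order = np.argsort(fv)
        V, fv = V[order], fv[order]        # best first, worst last
        centroid = V[:-1].mean(axis=0)     # average of the n better vertices
        x_r = 2.0 * centroid - V[-1]       # reflect the worst vertex
        if f(x_r) <= fv[-1] - gamma * L * r ** 2:
            V[-1] = x_r                    # reflection preserves regularity
        else:
            V = V[0] + theta * (V - V[0])  # shrink, anchored at the best vertex
            r *= theta
            if r < r_tol:
                break
    fv = np.array([f(v) for v in V])
    return V[np.argmin(fv)]
```

On a simple quadratic such as $f(x) = \|x - a\|^2$, the loop reflects across the domain until it nears the minimizer, then shrinks; the returned best vertex approximates $a$.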
2. Complexity Analysis and Theoretical Guarantees
RSSM is supported by explicit worst-case iteration bounds for reaching $\epsilon$-stationary or $\epsilon$-optimal solutions, which depend on convexity assumptions on the objective and on geometric properties:
- Nonconvex smooth case ($L$-Lipschitz gradient): an iteration bound on the order of $\mathcal{O}(\epsilon^{-2})$, with constants depending on $n$, $L$, and the initial radius $r_0$. This means RSSM will find a point $x$ with $\|\nabla f(x)\| \le \epsilon$ after no more than this number of iterations.
- Polyak–Łojasiewicz (PL) condition (parameter $\mu > 0$): a linear rate, giving an iteration bound on the order of $\mathcal{O}(\log(1/\epsilon))$, provided the descent threshold is sufficiently small relative to $1/n$, reflecting the stronger geometry.
- Convex and strongly convex cases:
  - For convex functions, an iteration bound on the order of $\mathcal{O}(\epsilon^{-1})$, where the gap is measured in the average function value across the simplex vertices.
  - For strongly convex objectives (parameter $\mu > 0$), the rate is linear: $f(x^{k+1}) - f^\ast \le (1 - c)\,(f(x^k) - f^\ast)$, with the per-iteration improvement factor $c \in (0, 1)$ depending on $\mu$, $L$, and the dimension $n$.
These results follow from quantifying per-iteration function decrease, bounding ascent from shrinking, and using sharp linear interpolation error bounds.
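To illustrate the last step of that argument, a per-iteration improvement converts into an iteration count in the standard way (with a generic contraction factor $c$ standing in for the method's exact constant):

```latex
% Per-iteration improvement with contraction factor c \in (0,1):
f(x_{k+1}) - f^\ast \le (1 - c)\bigl(f(x_k) - f^\ast\bigr)
\;\Longrightarrow\;
f(x_k) - f^\ast \le (1 - c)^k \bigl(f(x_0) - f^\ast\bigr).
% Since (1-c)^k \le e^{-ck}, reaching f(x_k) - f^\ast \le \epsilon requires only
k \;\ge\; \frac{1}{c}\,\log\!\frac{f(x_0) - f^\ast}{\epsilon}
\quad \text{iterations.}
```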
3. Geometric and Partition Properties of Regular Simplexes
A regular simplex maintains specified volume and diameter relationships. For each simplex $S$ in a partition of $\mathbb{R}^n$, a regularity condition is imposed: $\operatorname{vol}(S) \ge \sigma \cdot \operatorname{diam}(S)^n$, where $\sigma$ is a positive constant and $\operatorname{diam}(S)$ is the diameter. This non-degeneracy ensures the partitioning remains “well-shaped.”
When iteratively partitioning space for search, the maximum number of partitioned simplexes overlapping at any point (the intersection number) is bounded by a constant that depends only on the regularity parameter $\sigma$ and grows exponentially with the dimension $n$.
This exponentially dimension-dependent bound guarantees that branch-and-bound procedures within RSSM only require consideration of a limited number of overlapping regions, aiding in controlling worst-case computational complexity and informing adaptive partitioning strategies.
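To make the volume-diameter regularity condition concrete, the sketch below evaluates the ratio $\operatorname{vol}(S)/\operatorname{diam}(S)^n$ for regular simplexes (the best-shaped case), using the standard closed-form volume of a regular $n$-simplex; the rapid decay of the ratio shows why any fixed constant $\sigma$ must be chosen with the dimension in mind.

```python
import math

def regular_simplex_volume(n, edge=1.0):
    """Closed-form volume of a regular n-simplex with the given edge
    length: edge^n / n! * sqrt((n + 1) / 2^n)."""
    return edge ** n / math.factorial(n) * math.sqrt((n + 1) / 2 ** n)

def regularity_ratio(n):
    """vol(S) / diam(S)^n for a regular n-simplex; its diameter equals the
    common edge length, so the ratio is scale-invariant."""
    return regular_simplex_volume(n, 1.0)

# The best attainable regularity constant shrinks super-exponentially in n:
for n in (2, 3, 5, 10):
    print(n, regularity_ratio(n))
```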
4. Connectivity, Diameter, and Traversal in Simplicial Complexes
Moore-type bounds extend classical graph connectivity results to regular simplicial complexes. Given a $k$-complex in which each $k$-simplex is adjacent to exactly $\Delta$ others (through shared faces), the number $N$ of simplices reachable within $D$ steps satisfies
$$N \le 1 + \Delta \sum_{i=0}^{D-1} (\Delta - 1)^i,$$
where $D$ is the complex's diameter.
Rearrangement yields a logarithmic lower bound, $D \ge \log_{\Delta - 1} N - O(1)$, and for minimum degree $\Delta \ge 3$ the diameter grows at most logarithmically in the number of simplices.
Thus, even large complexes exhibit small diameters, which means RSSM and related search methods can navigate the structure efficiently—traversal depth increases only logarithmically with the number of simplices.
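A small numeric sketch of the Moore-type argument (assuming the classical graph form of the bound, with uniform degree $\Delta$): the smallest diameter $D$ consistent with $N$ simplices grows only logarithmically in $N$.

```python
def moore_reachable(delta, D):
    """Maximum number of nodes within distance D of a fixed node in a
    structure of uniform degree `delta` (classical Moore bound)."""
    total, frontier = 1, delta
    for _ in range(D):
        total += frontier
        frontier *= (delta - 1)
    return total

def min_diameter(delta, N):
    """Smallest D with moore_reachable(delta, D) >= N: a lower bound on
    the diameter of any degree-`delta` structure with N elements."""
    D = 0
    while moore_reachable(delta, D) < N:
        D += 1
    return D

# The diameter lower bound grows logarithmically in N:
for N in (10, 100, 10_000, 1_000_000):
    print(N, min_diameter(3, N))
```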
5. Relationship to High-Dimensional Simplex Search and Supermetric Spaces
Both RSSM and the method of “High-Dimensional Simplexes for Supermetric Search” (Connor et al., 2017) share commonalities: the construction of index structures from reference points and the use of simplex geometry for search pruning via lower/upper bounds. The latter generalizes the simplex construction by exploiting the $n$-point property—allowing arbitrary high-dimensional simplex construction over supermetric spaces (spaces isometrically embeddable in Hilbert space).
Given $n$ reference points and any object $u$, the mapping of $u$ to apex coordinates in $\mathbb{R}^n$ allows tight bounding of the true distance $d(u, v)$ by a lower/upper bound pair computed as Euclidean distances between apex vectors, differing only in the treatment of the final coordinate. This geometric embedding delivers significant reductions in metric cost and data storage for similarity search, and with well-chosen dimensionality, tight approximations are achieved—confirmed through experimental speedups of 4.5–8.5 times on benchmark data.
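A minimal sketch of the bound computation, assuming the apex-coordinate form used in the supermetric-search construction (the final coordinate is the altitude above the reference hyperplane; differencing it yields the lower bound, summing it the upper bound; coordinate values here are illustrative):

```python
import math

def simplex_bounds(x, y):
    """Lower/upper bounds on the true distance d(u, v), given the apex
    coordinates x and y of objects u and v over the same n references.
    Assumes the final coordinate is the altitude above the reference
    hyperplane: differencing it gives the lower bound, summing it the
    upper bound."""
    base = sum((xi - yi) ** 2 for xi, yi in zip(x[:-1], y[:-1]))
    lwb = math.sqrt(base + (x[-1] - y[-1]) ** 2)
    upb = math.sqrt(base + (x[-1] + y[-1]) ** 2)
    return lwb, upb

# Pruning use: for a range query of radius t around u, any v with
# lwb > t can be discarded, and any v with upb <= t can be reported,
# without ever computing the original (expensive) metric.
lwb, upb = simplex_bounds([0.5, 1.0], [0.2, 0.4])
print(lwb, upb)
```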
6. Comparative Assessment and Practical Implications
RSSM departs from heuristic predecessors (Nelder-Mead, Spendley et al.) by furnishing explicit worst-case complexity estimates and retaining geometric regularity throughout iterations. This regularity is essential for
- Predictable progress per iteration
- Reliable shrinking and reflection steps
- Ensuring that step-sizes and shape remain well-controlled
RSSM's operation is robust to the degeneracy that could otherwise arise during repeated partitioning. Provided that regularity is monitored (i.e., the ratio $\operatorname{vol}(S)/\operatorname{diam}(S)^n$ remains sufficiently large), the method avoids pathological behavior—bounding the intersection number, safeguarding against excessive simplex redundancy, and facilitating rigorous stopping conditions.
These properties render RSSM appropriate as a benchmark for derivative-free optimization methods: practitioners can select parameters (the descent threshold $\gamma$, shrink factor $\theta$, initial radius $r_0$, etc.) informed by their direct connection to complexity bounds, and empirically anticipate scaling consistent with the theoretical $\mathcal{O}(\epsilon^{-2})$ rate for nonconvex cases and better rates under stronger convexity.
A plausible implication is that maintenance and monitoring of simplex regularity (e.g., via re-meshing when the regularity ratio degrades) is advisable in practice, especially in high dimensions.
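Since exact arithmetic keeps the RSSM simplex perfectly regular, a practical monitor mainly needs to detect floating-point drift or degradation introduced by modified steps. A hypothetical monitoring helper (the perturbation and any re-mesh threshold are illustrative, not from the source):

```python
import numpy as np

def edge_ratio(V):
    """Shortest-to-longest edge ratio of the simplex whose vertices are
    the rows of V; equals 1.0 exactly when all edges are the same length."""
    m = len(V)
    edges = [np.linalg.norm(V[i] - V[j])
             for i in range(m) for j in range(i + 1, m)]
    return min(edges) / max(edges)

# Equilateral triangle in R^2: ratio ~1.0. Perturbing a vertex lowers the
# ratio, which a monitoring rule could use to trigger re-meshing around
# the best vertex before continuing.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(edge_ratio(tri))            # close to 1.0

tri[2] += np.array([0.3, 0.0])    # simulate drift
print(edge_ratio(tri))            # noticeably below 1.0
```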
7. Research Directions and Limitations
Several lines of inquiry remain:
- Extended Simplicial Schemes: Future work is aimed at extending RSSM analysis to variants incorporating expansion/contraction, or relaxation of regularity constraints, which may alter convergence behavior and complexity.
- Noisy Evaluations: Derivation of RSSM complexity under approximate or stochastic evaluations is an open question.
- Adaptive Algorithmic Parameters: Dynamic adjustment of descent and shrinkage parameters by real-time monitoring could further improve efficiency.
- Empirical Validation: While theoretical bounds are sharp, scaling analysis on large problems will clarify RSSM's practical behavior.
- High-Dimensional Complexes: Since intersection number bounds scale exponentially with dimension, maintaining strong regularity is crucial for tractable computational cost in high-dimensional optimization.
RSSM stands as a mathematically principled approach to simplex-type derivative-free optimization, supported by explicit worst-case controls on convergence, partition overlap, and navigability—providing both theoretical foundation and practical guidance for future algorithmic development.