Sequential Algorithm Portfolios
- Sequential algorithm portfolios are frameworks for dynamically selecting black-box solvers using both static sequencing and adaptive methods.
- They leverage low-level exploratory landscape analysis features and machine learning to improve solver selection on diverse benchmark instances like MA-BBOB.
- Empirical studies reveal that current feature-based selectors struggle with high structural variance, highlighting the need for richer features and adaptive portfolio strategies.
Sequential algorithm portfolios are frameworks for black-box algorithm selection, in which a meta-algorithm dynamically chooses and executes members of a predefined set (the portfolio), either for a fixed sequence of runs or in an adaptive manner. In the context of continuous optimization benchmarking, algorithm portfolios are primarily used to study algorithm selection protocols, generalization capabilities, and the finer-grained transitions in solver performance across complex landscapes. Recent developments, including the MA-BBOB function generator, have enabled precise empirical evaluations of algorithm selectors by providing broad, parametric sets of novel benchmark problems, illuminating the limitations of existing feature-based approaches and guiding future research in automated algorithm selection (Vermetten et al., 2023).
1. Portfolio Construction and Usage in Black-Box Optimization
A sequential algorithm portfolio is composed of multiple black-box solvers (e.g., dCMA-ES, modCMA, DE, modDE, COBYLA (Vermetten et al., 2023)), each with distinct strengths and failure modes across problem landscapes. Portfolio construction typically selects solvers that cover complementary regions of the exploratory landscape analysis (ELA) feature space, maximizing empirical diversity. Portfolios may operate in one of several modes:
- Static sequencing: Solvers are chosen according to a fixed policy (round-robin, best-so-far, etc.).
- Adaptive selection: A meta-algorithm predicts which solver to deploy using problem features, past performance metrics, or landscape statistics.
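The two operating modes above can be sketched in a few lines. The interface below is a hypothetical illustration (the solver signature, `run_static`, and `run_adaptive` are not from the cited work): solvers are callables that take a problem and a budget and return the best objective value found.

```python
class SequentialPortfolio:
    """Minimal sketch of a sequential portfolio (illustrative interface,
    not the implementation from the cited study)."""

    def __init__(self, solvers):
        self.solvers = solvers                    # name -> solver callable
        self.history = {name: [] for name in solvers}

    def run_static(self, problems, budget):
        """Static sequencing: a fixed round-robin policy over the portfolio."""
        names = list(self.solvers)
        results = []
        for i, problem in enumerate(problems):
            name = names[i % len(names)]          # fixed cyclic choice
            best = self.solvers[name](problem, budget)
            self.history[name].append(best)
            results.append((name, best))
        return results

    def run_adaptive(self, problems, budget, selector):
        """Adaptive selection: a selector maps each problem to a solver,
        e.g. a model trained on ELA features or past performance."""
        results = []
        for problem in problems:
            name = selector(problem)
            best = self.solvers[name](problem, budget)
            self.history[name].append(best)
            results.append((name, best))
        return results
```

The same portfolio object supports both modes, which is what makes side-by-side comparisons of static and adaptive policies straightforward in benchmarking studies.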
The primary function of a sequential portfolio in empirical studies is to assess how well feature-based selection schemes generalize beyond their training data, by challenging them with benchmark problems that systematically bridge, distort, or merge existing landscape types (Vermetten et al., 2023).
2. Feature-Based Algorithm Selection and Portfolio Performance
A significant application of sequential portfolios is automated algorithm selection, where candidate solvers are dynamically assigned to problems based on low-level ELA features. These features can include distributions of function values, meta-model fits (the quality of linear and quadratic regression approximations to the landscape), level-set statistics, curvature and convexity estimates, and various summary statistics over sample sets (Vermetten et al., 2023). Portfolio selectors, commonly trained via random forests or other learning-based methods, operate on instance landscapes extracted from benchmarking suites such as BBOB.
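A random-forest selector of this kind reduces to a standard classification problem: feature vectors in, solver labels out. The sketch below uses scikit-learn; the feature values and solver labels are synthetic stand-ins (e.g. a "quadratic meta-model fit" column), not data from the cited study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: each row is a vector of low-level ELA features
# (e.g. meta-model R^2, level-set and y-distribution statistics); each label
# names the best-performing solver on that training instance.
rng = np.random.default_rng(0)
X_unimodal = rng.normal(loc=0.9, scale=0.05, size=(50, 4))    # strong quadratic fit
X_multimodal = rng.normal(loc=0.2, scale=0.05, size=(50, 4))  # weak quadratic fit
X = np.vstack([X_unimodal, X_multimodal])
y = np.array(["COBYLA"] * 50 + ["modCMA"] * 50)

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X, y)

# Selecting a solver for a new instance from its ELA feature vector:
choice = selector.predict([[0.88, 0.90, 0.92, 0.87]])[0]
```

The generalization question studied with MA-BBOB is precisely whether such a classifier, trained on one suite's feature distribution, still picks well when the test instances occupy different regions of feature space.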
Empirical evidence from MA-BBOB shows that selectors trained strictly on the canonical BBOB functions exhibit poor generalization when evaluated on affine-combined or randomly shifted landscapes. Specifically, such selectors may underperform even the single best solver (modCMA), exposing overfitting to the suite’s particular statistical artifacts and prompting a reevaluation of feature representativeness (Vermetten et al., 2023).
3. Benchmarking Protocols for Sequential Portfolios
Benchmarking sequential portfolios on continuous domains necessitates carefully crafted protocols to ensure representativeness and reproducibility:
- Benchmark instance generation: Problem instances are parameterized with random translations, rotations, and objective shifts to simulate diverse real-world scenarios. MA-BBOB further generates affine combinations and random optimum placements, producing landscapes that circumvent BBOB’s original instance-generation biases (Vermetten et al., 2023).
- Performance metrics: Area Over the Convergence Curve (AOCC)—typically log-scaled and averaged over multiple runs and instances—is used to summarize solver progress and switching points in the portfolio (Vermetten et al., 2023).
- Training and evaluation splits: Portfolios are empirically validated by training selectors on (a) landscape ELA features or (b) ground-truth weight vectors from the problem generator, then testing generalization to novel instances.
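The AOCC metric in the protocol above can be sketched as follows. This is an illustrative implementation of a log-scaled area-over-the-convergence-curve score; the precision bounds are common defaults, not values mandated by the cited work.

```python
import math

def aocc(best_so_far, lower=1e-8, upper=1e2):
    """Area Over the Convergence Curve (sketch).

    best_so_far: best-so-far precision (f(x) - f_opt) per function
    evaluation. Each value is clipped to [lower, upper] and mapped onto
    [0, 1] on a log10 scale; the result is the mean of the transformed
    curve, so 1.0 means the target precision was reached immediately and
    0.0 means no progress below the upper bound.
    """
    log_lb, log_ub = math.log10(lower), math.log10(upper)
    total = 0.0
    for f in best_so_far:
        f = min(max(f, lower), upper)   # clip precision to the bounds
        total += 1.0 - (math.log10(f) - log_lb) / (log_ub - log_lb)
    return total / len(best_so_far)
```

In benchmarking practice this score is then averaged over independent runs and problem instances, as noted above.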
Selectors exploiting explicit weight information outperform generic ELA-based selectors, highlighting the limited expressiveness of current feature sets; selectors trained on shuffled labels may even rival handcrafted portfolio combinations, exposing latent class-imbalance issues (Vermetten et al., 2023).
4. Function Generators and Structural Variance in Portfolios
MA-BBOB introduces generators that produce arbitrarily large sets of continuous black-box functions by means of affine combinations and shifts of the canonical BBOB functions. These generators expose and exacerbate the issue of structural variance: algorithm selectors trained only on BBOB fail to capture the richness of novel landscape combinations. Critical design choices, such as uniform random optimum placement and weight thresholding, ensure that instance distributions diverge substantially from those of BBOB, covering previously unrepresented regions of the ELA feature space (Vermetten et al., 2023).
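The two design choices just mentioned can be sketched in a simplified generator. This is not the released MA-BBOB implementation: the combination rule (a weighted sum in log-precision space), the epsilon constant, and the assumption that each component has optimum value 0 at the origin are simplifications for illustration.

```python
import math

def make_affine_combination(components, weights, x_opt, threshold=0.0):
    """Sketch of a MA-BBOB-style generator: threshold and renormalize the
    weights, shift the optimum to x_opt (sampled uniformly in the domain
    in MA-BBOB), and combine component precisions in log space."""
    # Weight thresholding: drop near-zero components, then renormalize.
    kept = [(f, w) for f, w in zip(components, weights) if w > threshold]
    total = sum(w for _, w in kept)
    kept = [(f, w / total) for f, w in kept]

    def combined(x):
        # Shift so the combined optimum sits at x_opt.
        z = [xi - oi for xi, oi in zip(x, x_opt)]
        # Components are assumed to attain value 0 at the origin; the small
        # epsilon keeps the logarithm finite at the optimum.
        s = sum(w * math.log10(1e-12 + f(z)) for f, w in kept)
        return 10 ** s

    return combined
```

Because the optimum location is drawn independently of the component functions, the resulting instances break the positional regularities that selectors can otherwise exploit on BBOB.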
Portfolio performance transitions smoothly along parameterized “affine paths” between problem types. For example, CMA-ES, DE, and COBYLA exhibit distinct breakpoint behaviors as landscape hardness changes continuously from sphere-like to highly multimodal (Vermetten et al., 2023). These transitions enable fine-grained evaluation of algorithm selectors and illuminate subtle interactions between landscape features and portfolio efficacy.
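An affine path of this kind can be traced with a one-parameter interpolation. The snippet below is illustrative: it combines two functions in log-precision space with a small epsilon for numerical safety (a simplification of the MA-BBOB construction), sweeping the weight `alpha` from a sphere toward a Rastrigin-style multimodal landscape.

```python
import math

def affine_path_value(f1, f2, alpha, x, eps=1e-12):
    """Value at x of the affine path between f1 (alpha = 0) and f2
    (alpha = 1), combined in log-precision space (illustrative sketch)."""
    s = (1 - alpha) * math.log10(f1(x) + eps) + alpha * math.log10(f2(x) + eps)
    return 10 ** s

# Sweeping alpha traces how the landscape morphs continuously from
# sphere-like to multimodal -- the regime in which solver breakpoints
# are observed.
sphere = lambda x: sum(xi * xi for xi in x)
rastrigin = lambda x: 10 * len(x) + sum(
    xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x
)
path = [affine_path_value(sphere, rastrigin, a / 10, [0.5, 0.5]) for a in range(11)]
```

Evaluating a portfolio at each point along such a path is what reveals where one solver's dominance hands over to another's.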
5. Limitations, Observed Phenomena, and Empirical Findings
Empirical studies on sequential portfolios using MA-BBOB highlight several limitations and phenomena:
- Standard ELA-based selectors fail to generalize beyond BBOB due to unmodeled high-level structural variance and optima placement (Vermetten et al., 2023).
- Heatmap analyses reveal transitions in algorithm dominance as problem parameters vary: near-pure spherical mixtures favor COBYLA, intermediate regimes favor CMA-ES variants, and modular CMA overtakes near component boundaries.
- “Shuffled-label” baselines may outperform naive feature-based selectors, indicating issues with instance class imbalance and raising questions on portfolio design and training protocols (Vermetten et al., 2023).
These insights suggest that richer feature sets, more representative training regimes, and systematic exploration of the problem-instance space are necessary for robust portfolio-based algorithm selection.
6. Research Directions and Portfolio Innovation
Future portfolio research aims to extend function generators with broader transformation types (rotations, scalings), develop new ELA features specifically targeting mixed or shifted landscapes, and quantify the coverage and bias of instance spaces (Vermetten et al., 2023). Sequential portfolios are poised to benefit from high-diversity training data and per-run adaptation protocols, leveraging fine-grained landscape transitions for automated algorithm configuration.
The availability of frameworks such as MA-BBOB (implemented in IOHexperimenter and IOHanalyzer, with reproducibility via published source code and data) supports systematic research into algorithm selection, portfolio composition, and instance-space bias quantification. Sequential portfolios thus remain an essential tool for advancing empirical optimization research and for bridging generalization gaps between benchmark suites and real-world scenarios (Vermetten et al., 2023).