Black-box Optimization via Marginal Means (BOMM)
- BOMM is a statistical method that transforms expensive black-box functions into nearly additive forms, enabling decomposition into coordinate-wise optimization problems.
- It estimates low-dimensional marginal mean functions from sparse data, reducing computational costs compared to full-dimensional surrogates.
- The framework achieves dimension-independent error rates and can be robustified with tail mean corrections to handle moderate non-additivity.
Black-box Optimization via Marginal Means (BOMM) is a statistical framework for global optimization of expensive, high-dimensional black-box functions. Distinct from classical “pick-the-winner” strategies that simply select the best observed function value, BOMM leverages low-dimensional marginal mean functions, which are estimable even from sparse data, to construct consistent estimators of the global optimizer. Under the assumption of approximate additivity (possibly after a monotone transformation of the objective), BOMM achieves theoretically justified performance and addresses core difficulties posed by the curse of dimensionality. It is particularly effective in scientific and engineering applications where each function evaluation (such as a computer simulation) is computationally costly.
1. Theoretical Foundation and Problem Setting
Consider the optimization of an unknown, expensive-to-evaluate function $f : [0,1]^d \to \mathbb{R}$,
$$x^{*} = \operatorname*{arg\,min}_{x \in [0,1]^d} f(x).$$
In BOMM, the key modeling assumption is that after applying a strictly monotone transformation $g$, the function is approximately additive,
$$g(f(x)) = \sum_{j=1}^{d} f_j(x_j) + \delta(x),$$
where the $f_j$ are smooth, univariate functions and $\delta$ is a (small) non-additive residual. The transformation $g$, often estimated from data (e.g., via a Box–Cox transform), renders complex, nonlinear objectives more nearly additive, facilitating decomposition along coordinate directions.
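As a concrete sketch of this transformation step, the snippet below fits a Box–Cox transform by maximum likelihood using SciPy; the toy data and the choice of `scipy.stats.boxcox` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import boxcox

# Toy objective values: positive and strongly right-skewed (lognormal).
rng = np.random.default_rng(0)
y = np.exp(rng.normal(size=200))

# Fit a Box-Cox transform g(y) = (y^lam - 1)/lam by maximum likelihood.
# boxcox requires strictly positive inputs and returns the transformed
# values together with the fitted exponent lam.
y_t, lam = boxcox(y)
```

For roughly lognormal values, the fitted exponent lands near zero, i.e., close to a log transform, which is the kind of variance-stabilizing monotone map that tends to make an objective more nearly additive.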
The critical insight is that, under such a model, the global optimum can, in the additive case, be constructed coordinate-wise as
$$x^{*} = (x_1^{*}, \ldots, x_d^{*}), \qquad x_j^{*} = \operatorname*{arg\,min}_{x_j \in [0,1]} m_j(x_j),$$
where the marginal mean function $m_j$ is defined as
$$m_j(x_j) = \int_{[0,1]^{d-1}} g(f(x)) \, dx_{-j},$$
with integration over all coordinates other than $x_j$. BOMM thus reframes the $d$-dimensional optimization as $d$ separate one-dimensional minimization problems over the marginal means.
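The coordinate-wise reduction can be illustrated numerically. The sketch below (toy additive objective; all names and constants are mine) estimates each marginal mean by Monte Carlo and recovers the global minimizer one coordinate at a time:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

def f(x):
    # Additive toy objective on [0,1]^d, minimized at x_j = 0.3 in every coordinate.
    return np.sum((x - 0.3) ** 2, axis=-1)

# One fixed Monte Carlo sample, reused across grid points (common random
# numbers), so marginal-mean comparisons are noise-free for an additive f.
X_base = rng.uniform(size=(4000, d))

def marginal_mean(j, t):
    # Estimate m_j(t): average f over the other coordinates with x_j fixed at t.
    X = X_base.copy()
    X[:, j] = t
    return f(X).mean()

grid = np.linspace(0.0, 1.0, 101)
x_hat = np.array([grid[np.argmin([marginal_mean(j, t) for t in grid])]
                  for j in range(d)])
```

For an additive objective the marginal mean differs from the coordinate function only by a constant, so each one-dimensional argmin lands on the true coordinate of the global minimizer.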
2. Marginal Mean Estimation and the BOMM Estimator
The practical BOMM estimator is computed as follows:
- Data Acquisition. Select $n$ design points $x^{(1)}, \ldots, x^{(n)}$ (often using space-filling designs such as Latin hypercube or uniform random sampling), and evaluate $y^{(i)} = f(x^{(i)})$.
- Transformation Fitting. Estimate an appropriate monotone transformation $\hat{g}$ from the observed function values.
- Marginal Mean Computation. For each dimension $j = 1, \ldots, d$, estimate the marginal mean function $\hat{m}_j(x_j)$, often via empirical or surrogate-based marginalization using the available data or a fitted surrogate model.
- Minimization. Compute $\hat{x}_j = \operatorname*{arg\,min}_{x_j \in [0,1]} \hat{m}_j(x_j)$ for each $j$.
- Aggregate. Form the BOMM estimate $\hat{x} = (\hat{x}_1, \ldots, \hat{x}_d)$.
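The steps above can be sketched end to end on a well-behaved additive toy objective; the Latin hypercube design, binned empirical marginalization, and all constants here are illustrative choices, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import qmc

d, n = 5, 8000

def f(x):
    # Additive toy objective with global minimizer at x_j = 0.3.
    return np.sum((x - 0.3) ** 2, axis=-1)

# Step 1: space-filling design and function evaluations.
sampler = qmc.LatinHypercube(d=d, seed=0)
X = sampler.random(n)
y = f(X)

# Step 2: identity transform (the toy values are already well-behaved).

# Step 3: empirical marginal means by binning each coordinate.
n_bins = 10
edges = np.linspace(0.0, 1.0, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

x_hat = np.empty(d)
for j in range(d):
    idx = np.clip(np.digitize(X[:, j], edges) - 1, 0, n_bins - 1)
    m_j = np.array([y[idx == b].mean() for b in range(n_bins)])
    # Steps 4-5: minimize each estimated marginal mean and aggregate.
    x_hat[j] = centers[np.argmin(m_j)]
```

With a bin width of 0.1 the estimate can only be coordinate-wise accurate to the nearest bin center, which is enough to illustrate the mechanics; the surrogate-based marginalization described below is the practical refinement.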
When non-additivity is non-negligible, a diagnostic based on a surrogate model parameter (as in the transformed additive GP surrogate) assesses the degree of interaction and determines whether to use marginal means or to switch to tail means (BOMM+), which average only the lower $\alpha$-tail of the function values to make the estimator more robust.
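A minimal sketch of the tail-mean idea, assuming a mildly non-additive toy objective and an illustrative tail fraction `alpha = 0.2` (the paper's diagnostic and tuning choices may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, alpha = 4, 6000, 0.2

def f(x):
    # Mildly non-additive: additive part plus one interaction term.
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.5 * x[:, 0] * x[:, 1]

X = rng.uniform(size=(n, d))
y = f(X)

def tail_marginal(j, n_bins=10):
    # Per-coordinate bins, but average only the lower alpha-tail of the
    # function values in each bin instead of the full conditional mean.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(X[:, j], edges) - 1, 0, n_bins - 1)
    vals = []
    for b in range(n_bins):
        yb = np.sort(y[idx == b])
        k = max(1, int(alpha * len(yb)))
        vals.append(yb[:k].mean())
    return centers, np.array(vals)

x_hat = np.array([c[np.argmin(v)]
                  for c, v in (tail_marginal(j) for j in range(d))])
```

Focusing on the lower tail downweights regions where the interaction term inflates typical function values, which is the robustification BOMM+ aims for.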
3. Theoretical Guarantees and Error Rates
Under regularity conditions (sufficient smoothness of the additive components and the residual, monotonicity of the transformation, and well-distributed design points), the BOMM estimator $\hat{x}$ is consistent for optimization:
$$f(\hat{x}) - f(x^{*}) \xrightarrow{\;p\;} 0 \quad \text{as } n \to \infty.$$
Crucially, the convergence rate does not degrade exponentially with the ambient dimension $d$, in contrast with full surrogate-based methods (e.g., those using Matérn-$\nu$ GPs), whose rates typically carry $d$ in the exponent (on the order of $n^{-\nu/(2\nu + d)}$) and thus deteriorate rapidly as $d$ grows. This dimension independence is a core appeal of BOMM in high-dimensional settings.
4. Transformed Additive Gaussian Process Surrogates
For practical implementation, BOMM employs a Transformed Approximate Additive Gaussian Process (TAAG) surrogate. The model is specified as
$$g(f(x)) = h_{\mathrm{add}}(x) + h_{\mathrm{int}}(x),$$
where $h_{\mathrm{add}}$ is an additive GP over coordinates and $h_{\mathrm{int}}$ models the interaction (non-additivity). The additive kernel is $k_{\mathrm{add}}(x, x') = \sum_{j=1}^{d} k_j(x_j, x'_j)$, and the interaction kernel $k_{\mathrm{int}}$ is isotropic or separable; the overall covariance is the mixture $(1-\lambda)\, k_{\mathrm{add}} + \lambda\, k_{\mathrm{int}}$. The mixing parameter $\lambda \in [0, 1]$ reflects the empirical degree of non-additivity and is estimated from data.
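A minimal sketch of such a mixed additive-plus-interaction covariance, with illustrative kernel choices and a mixing weight `lam` (names and defaults are mine, not the TAAG specification):

```python
import numpy as np

def rbf_1d(a, b, ls=0.3):
    # Squared-exponential kernel between two 1-D coordinate arrays.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def taag_style_kernel(X, Z, lam=0.2, ls=0.3):
    d = X.shape[1]
    # Additive part: average of one-dimensional kernels over coordinates.
    k_add = sum(rbf_1d(X[:, j], Z[:, j], ls) for j in range(d)) / d
    # Interaction part: an isotropic kernel on the full input.
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    k_int = np.exp(-0.5 * sq / ls ** 2)
    # Mixing: lam near 0 means nearly additive, lam near 1 strong interaction.
    return (1.0 - lam) * k_add + lam * k_int

rng = np.random.default_rng(3)
X = rng.uniform(size=(30, 4))
K = taag_style_kernel(X, X)
```

Because both parts are valid kernels and the mixture weights are nonnegative, the combined covariance matrix stays symmetric positive semi-definite.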
After model fitting (typically via empirical Bayes), the posterior mean of the transformed function is computed, and the one-dimensional marginal mean functions $\hat{m}_j$ are efficiently derived from it. Optimization over each $\hat{m}_j$ via a grid search or direct minimization yields the BOMM estimator.
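One way to obtain marginal means from a fitted surrogate is to average its posterior mean over the non-active coordinates. The sketch below does this by Monte Carlo with scikit-learn's GP regressor; in the actual TAAG surrogate the additive structure permits more efficient marginalization, so this is only an illustrative stand-in.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
d, n = 3, 120

def f(x):
    # Toy objective with its minimum at (0.3, 0.3, 0.3).
    return np.sum((x - 0.3) ** 2, axis=-1)

# Fit a plain GP surrogate to a small design.
X = rng.uniform(size=(n, d))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              alpha=1e-6, normalize_y=True).fit(X, f(X))

# Fixed Monte Carlo sample for the non-active coordinates (common random
# numbers keep comparisons across grid points deterministic).
Z_base = rng.uniform(size=(500, d))

def gp_marginal_mean(j, t):
    # Average the GP posterior mean over the other coordinates with x_j = t.
    Z = Z_base.copy()
    Z[:, j] = t
    return gp.predict(Z).mean()

grid = np.linspace(0.0, 1.0, 41)
x1_hat = grid[np.argmin([gp_marginal_mean(0, t) for t in grid])]
```

Minimizing each such curve over a grid, one coordinate at a time, is exactly the final step of the estimator described above.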
When the estimated mixing parameter exceeds a diagnostic threshold, indicating strong interaction effects, BOMM+ replaces the plain average in the marginal means with a tail average, robustifying the method in the presence of interaction.
5. Addressing the Curse of Dimensionality
By reducing optimization from one $d$-dimensional problem to $d$ one-dimensional problems, BOMM "tempers" the curse of dimensionality. Exploiting the (possibly transformed) additive structure allows accurate estimation of each marginal mean with far fewer data than would be needed to model the objective as an arbitrary function on $[0,1]^d$. When the objective is nearly additive after transformation, BOMM achieves error rates unencumbered by $d$.
If the structure is not exactly additive (i.e., when the non-additive residual is moderate), BOMM+ with its tail-mean correction mitigates estimator degradation, as confirmed by both theoretical arguments and simulation experiments.
6. Empirical Evaluation and Scientific Application
Numerical studies on standard black-box optimization test functions (six-hump camel, wing weight, OTL circuit, piston, and custom test functions with varying interaction strength) demonstrate that:
- Classical pick-the-winner strategies select suboptimal design points in moderate/high dimensions.
- Surrogate-based optimization (SBO) using full-dimensional GP or deep GP surrogates often fails when the sample size $n$ is small relative to the dimension $d$.
- BOMM (and BOMM+) consistently achieves smaller optimality gaps, especially when the ambient dimension is moderate to high.
In practical science, BOMM was applied to the optimization of a neutrino detector design (LEGEND double-beta decay experiment), optimizing over design parameters with expensive simulations. BOMM and BOMM+ identified configurations with improved suppression of background isotope production compared to classical and surrogate-based methods, substantiating the method’s efficacy in challenging real-world black-box settings (Kim et al., 3 Aug 2025).
7. Practical Considerations and Extensions
BOMM’s efficacy is maximized under these conditions:
- The function of interest is (approximately) additive after monotone transformation.
- The evaluation budget is severely limited (i.e., only a small number $n$ of function evaluations is affordable).
- Input dimensions can be adequately covered by space-filling designs.
The method’s consistency and convergence rate, which does not scale unfavorably with the dimension $d$, enable robust performance as dimensionality increases. The approach can be extended by:
- Diagnostics for non-additivity, triggering robustification via tail means (BOMM+).
- Incorporation into more general surrogate-based optimization frameworks.
- Potential integration with simulation-based and marginal probability-based sampling strategies.
In summary, BOMM offers a theoretically principled, scalable, and empirically validated methodology for high-dimensional, expensive black-box optimization, providing both statistical guarantees and practical computational tractability, especially for modern simulation-driven scientific applications (Kim et al., 3 Aug 2025).