Optimal ensembling strategies for boosting methods

Determine the best ensembling strategy for combining multiple base models h_k(x) into an aggregate predictor F_K(w,x) = Σ_{k=1}^K w_k h_k(x), including principled weight selection, model inclusion criteria, and regularization schemes that optimally trade off accuracy and complexity.
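Since the aggregate predictor F_K(w,x) is linear in the weights once the base-model outputs are fixed, one simple baseline for principled weight selection is regularized least squares. Below is a minimal sketch, assuming precomputed base-model outputs H[i, k] = h_k(x_i); the function names and the ridge penalty are illustrative assumptions, not prescriptions from the source.

```python
# Hypothetical sketch: ridge-regularized weight selection for
# F_K(w, x) = sum_k w_k h_k(x). Names and defaults are illustrative.
import numpy as np

def fit_ensemble_weights(H, y, alpha=1.0):
    """Solve min_w ||H w - y||^2 + alpha ||w||^2 in closed form.

    H     : (n_samples, K) matrix with H[i, k] = h_k(x_i)
    y     : (n_samples,) regression targets or +/-1 labels
    alpha : L2 penalty controlling ensemble complexity
    """
    K = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(K), H.T @ y)

def ensemble_predict(H_new, w):
    """Evaluate F_K(w, x) = sum_k w_k h_k(x) on new base-model outputs."""
    return H_new @ w
```

Here alpha plays the role of the accuracy/complexity trade-off in the problem statement; replacing the L2 penalty with a sparsity-inducing one (L0 or L1) turns weight selection into the combinatorial model-inclusion problem that Qboost addresses below.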

Background

The paper contrasts classical ensembling methods such as AdaBoost and XGBoost with a quantum‑inspired formulation (Qboost) that maps the ensembling objective to an Ising model for optimization via quantum annealing or QAOA.
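To make the mapping concrete: with binary inclusion weights w_k ∈ {0, 1}, a squared loss plus an L0 penalty expands into a quadratic form over the w_k, a QUBO, which is equivalent to an Ising model up to a change of variables. The following sketch builds this QUBO in the spirit of the standard Qboost formulation; the helper names, the toy data, and the brute-force minimizer (standing in for a quantum annealer or QAOA at small K) are assumptions for illustration, not the notes' implementation.

```python
# Hypothetical sketch of the Qboost-style QUBO construction.
import itertools
import numpy as np

def build_qubo(H, y, lam):
    """QUBO matrix for min_w sum_i (sum_k w_k h_k(x_i)/K - y_i)^2 + lam*||w||_0.

    H   : (n_samples, K) weak-learner outputs h_k(x_i) in {-1, +1}
    y   : (n_samples,) labels in {-1, +1}
    lam : penalty on the number of included models
    The constant sum_i y_i^2 is dropped; it does not affect the argmin.
    """
    n, K = H.shape
    Q = H.T @ H / K**2                      # quadratic couplings Q_kl
    # w_k^2 = w_k for binary variables, so linear terms move to the diagonal
    Q[np.diag_indices(K)] += lam - 2.0 * (H.T @ y) / K
    return Q

def brute_force_qubo(Q):
    """Exhaustively minimize w^T Q w; a classical stand-in for an annealer."""
    K = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=K):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

# Toy example: 5 weak classifiers, each ~70% accurate on 50 labeled points
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=50)
H = np.where(rng.random((50, 5)) < 0.7, y[:, None], -y[:, None])
w, energy = brute_force_qubo(build_qubo(H, y, lam=0.1))
print("selected models:", w, "energy:", energy)
```

Substituting w_k = (1 + s_k)/2 with spins s_k ∈ {-1, +1} rewrites the same quadratic form as an Ising Hamiltonian, which is the objective handed to quantum annealing or QAOA.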

Despite the wide use of ensembling in practice, the text explicitly notes that the overarching strategy—how to select models and weights to form the best ensemble—remains an open problem. This underscores the need for theory and algorithms that go beyond heuristics and provide optimality guarantees.

References

What the best ensembling strategy is remains an open problem.

Quantum machine learning -- lecture notes (arXiv:2512.05151, Žunkovič, 3 Dec 2025), Section: Quantised classical models, Subsection: Qboost.