Gradient boosting for convex cone predict and optimize problems (2204.06895v2)
Published 14 Apr 2022 in cs.LG, math.OC, and stat.ML
Abstract: Prediction models are typically optimized independently of the downstream decision optimization. The smart `predict, then optimize' (SPO) framework instead trains prediction models to minimize downstream decision regret. In this paper we present dboost, the first general-purpose implementation of smart gradient boosting for `predict, then optimize' problems. The framework supports convex quadratic cone programming, and gradient boosting is performed by implicit differentiation of a custom fixed-point mapping. Experiments comparing dboost with state-of-the-art SPO methods show that it can further reduce out-of-sample decision regret.
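The abstract does not spell out the fixed-point mapping dboost differentiates, but the underlying idea (implicit differentiation at a fixed point) is standard. Below is a minimal, generic sketch in Python, not the paper's implementation: it assumes a contraction mapping T(z, theta) with user-supplied Jacobians, and all names (fixed_point_gradient, dT_dz, dT_dtheta) are hypothetical helpers for illustration.

```python
import numpy as np

def fixed_point_gradient(T, dT_dz, dT_dtheta, z0, theta,
                         tol=1e-10, max_iter=1000):
    """Solve z* = T(z*, theta) by fixed-point iteration, then return
    (z*, dz*/dtheta) via the implicit function theorem:
        dz*/dtheta = (I - dT/dz)^{-1} dT/dtheta   evaluated at z*.
    Generic sketch; not the dboost API."""
    z = z0
    for _ in range(max_iter):
        z_next = T(z, theta)
        if np.linalg.norm(z_next - z) < tol:
            z = z_next
            break
        z = z_next
    n = z.size
    # Linear solve instead of an explicit inverse for stability.
    J = np.linalg.solve(np.eye(n) - dT_dz(z, theta),
                        dT_dtheta(z, theta))
    return z, J

# Toy check: T(z, theta) = 0.5 z + theta has fixed point z* = 2 theta,
# so dz*/dtheta should be 2.
T = lambda z, th: 0.5 * z + th
dT_dz = lambda z, th: np.array([[0.5]])
dT_dtheta = lambda z, th: np.array([[1.0]])
z_star, J = fixed_point_gradient(T, dT_dz, dT_dtheta,
                                 np.zeros(1), np.array([3.0]))
print(z_star, J)  # approx [6.], [[2.]]
```

In an SPO setting of the kind the abstract describes, theta would play the role of the predicted cost parameters and z* the optimizer of the downstream cone program; the Jacobian dz*/dtheta is what lets regret gradients flow back into the boosting step.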