Contextual Linear Optimization with Bandit Feedback (2405.16564v2)
Abstract: Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients and thereby improve average-cost performance. An example is the stochastic shortest path problem with random edge costs (e.g., traffic) and contextual features (e.g., lagged traffic, weather). Existing work on CLO assumes the data has fully observed cost coefficient vectors, but in many applications we can only see the realized cost of a historical decision, that is, just one projection of the random cost coefficient vector, which we refer to as bandit feedback. We study a class of offline learning algorithms for CLO with bandit feedback, which we term induced empirical risk minimization (IERM), where we fit a predictive model to directly optimize the downstream performance of the policy it induces. We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate, and we develop computationally tractable surrogate losses. A byproduct of our theory of independent interest is a fast-rate regret bound for IERM with full feedback and a misspecified policy class. We compare the performance of different modeling choices numerically using a stochastic shortest path example and provide practical insights from the empirical results.
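To make the IERM setup concrete, below is a minimal, self-contained Python sketch (not from the paper) of CLO with bandit feedback on a toy problem: the logged data contain only the realized cost y_i^T z_i of each historical decision, a linear model f_theta(x) = theta x induces the policy argmin over z in Z of f_theta(x)^T z, and the induced policy's cost is estimated with a self-normalized inverse-propensity-weighted plug-in. The toy decision set, the uniform logging policy, the linear model class, and the random-search optimizer are all illustrative placeholders, not the paper's surrogate losses or theoretical construction.

```python
# Illustrative sketch only: induced empirical risk minimization (IERM) for
# contextual linear optimization with bandit feedback on a toy problem.
# The problem data, decision set, linear model class, uniform logging policy,
# and random-search optimizer are assumptions made for illustration and are
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 2 contextual features, 3 cost coefficients, 4 feasible decisions.
d_x, d_c = 2, 3
decisions = np.array([   # rows are feasible decisions z in Z (e.g., path indicator vectors)
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
], dtype=float)

def true_cost_mean(x):
    """Ground-truth E[Y | X = x], used only to simulate logged data."""
    A = np.array([[1.0, 0.5], [0.2, 1.5], [0.8, 0.3]])
    return A @ x

# Logged bandit-feedback data: context x_i, logged decision z_i, and only the
# realized cost y_i^T z_i (never the full cost vector y_i).
n = 500
X = rng.normal(size=(n, d_x))
logging_probs = np.full(len(decisions), 1.0 / len(decisions))  # known uniform logging policy
Z_idx = rng.integers(len(decisions), size=n)
Y = np.array([true_cost_mean(x) for x in X]) + rng.normal(scale=0.1, size=(n, d_c))
observed_cost = np.einsum("ij,ij->i", Y, decisions[Z_idx])

def induced_policy(theta, x):
    """Decision induced by the predictive model: argmin over z in Z of f_theta(x)^T z."""
    return int(np.argmin(decisions @ (theta @ x)))

def estimated_policy_cost(theta):
    """Self-normalized inverse-propensity-weighted estimate of the induced
    policy's expected cost from bandit feedback (self-normalization avoids
    the degenerate minimizers of plain IPW when all costs are positive)."""
    chosen = np.array([induced_policy(theta, x) for x in X])
    w = (chosen == Z_idx) / logging_probs[Z_idx]
    return (w * observed_cost).sum() / w.sum() if w.sum() > 0 else np.inf

# "IERM" in this sketch: pick the model whose induced policy has the lowest
# estimated cost. Naive random search stands in for tractable surrogate losses.
best_theta, best_val = None, np.inf
for _ in range(2000):
    theta = rng.normal(size=(d_c, d_x))
    val = estimated_policy_cost(theta)
    if val < best_val:
        best_theta, best_val = theta, val

print("estimated cost of learned induced policy:", round(best_val, 3))
```

The contrast with the full-feedback setting is visible in the inner loop: with fully observed cost vectors the induced policy could be evaluated directly against them, whereas with bandit feedback the policy value must first be estimated off-policy before it can be minimized over the model class.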