Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support (2107.11533v1)

Published 24 Jul 2021 in stat.ML and cs.LG

Abstract: We address policy learning with logged data in contextual bandits. Current offline policy learning algorithms are mostly based on inverse propensity score (IPS) weighting, which requires the logging policy to have \emph{full support}, i.e., a non-zero probability for any context/action of the evaluation policy. However, many real-world systems do not guarantee such logging policies, especially when the action space is large and many actions have poor or missing rewards. Under such \emph{support deficiency}, offline learning fails to find optimal policies. We propose a novel approach that uses a hybrid of offline learning with online exploration: online exploration is used to explore actions unsupported in the logged data, while offline learning exploits the supported actions, avoiding unnecessary exploration. Our approach determines an optimal policy with theoretical guarantees using a minimal number of online explorations. We demonstrate our algorithms' effectiveness empirically on a diverse collection of datasets.
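
The abstract's central idea, IPS-weighted offline learning that can only cover actions the logging policy supports, plus online exploration reserved for the rest, can be illustrated with a minimal sketch. The code below is not the authors' algorithm; it is a toy, context-free illustration in which `logging_probs` (a hypothetical logging policy with two zero-probability actions) and the Bernoulli reward model are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logging policy over 5 actions; actions 3 and 4 have zero
# propensity, so the logged data has "deficient support" for them.
n_actions = 5
logging_probs = np.array([0.4, 0.3, 0.3, 0.0, 0.0])

def log_data(n):
    """Simulate n logged records: (action, reward, logging propensity)."""
    actions = rng.choice(n_actions, size=n, p=logging_probs)
    rewards = rng.binomial(1, 0.2 + 0.1 * actions)  # toy reward model
    return actions, rewards, logging_probs[actions]

actions, rewards, propensities = log_data(10_000)

def ips_value(target_probs, actions, rewards, propensities):
    """IPS estimate of a target policy's value, restricted to records
    whose actions the logging policy actually supports."""
    supported = propensities > 0
    weights = target_probs[actions[supported]] / propensities[supported]
    return np.mean(weights * rewards[supported])

# Offline step: evaluate a candidate (uniform) policy on supported actions.
target_probs = np.full(n_actions, 1.0 / n_actions)
offline_estimate = ips_value(target_probs, actions, rewards, propensities)

# Online step: unsupported actions carry no offline information, so they
# are earmarked for online exploration instead.
unsupported = np.flatnonzero(logging_probs == 0)
print(f"offline IPS estimate (supported actions): {offline_estimate:.3f}")
print(f"actions requiring online exploration: {unsupported}")
```

The IPS estimator reweights each logged reward by the ratio of target to logging propensity, so records with zero logging propensity contribute nothing to the estimate; that gap is exactly what the paper's online-exploration component is designed to close.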

Authors (5)
  1. Hung Tran-The
  2. Sunil Gupta
  3. Thanh Nguyen-Tang
  4. Santu Rana
  5. Svetha Venkatesh
Citations (5)
