
Off-Policy Evaluation for Large Action Spaces via Embeddings (2202.06317v2)

Published 13 Feb 2022 in cs.LG, cs.AI, and stat.ML

Abstract: Off-policy evaluation (OPE) in contextual bandits has seen rapid adoption in real-world systems, since it enables offline evaluation of new policies using only historic log data. Unfortunately, when the number of actions is large, existing OPE estimators -- most of which are based on inverse propensity score weighting -- degrade severely and can suffer from extreme bias and variance. This foils the use of OPE in many applications from recommender systems to LLMs. To overcome this issue, we propose a new OPE estimator that leverages marginalized importance weights when action embeddings provide structure in the action space. We characterize the bias, variance, and mean squared error of the proposed estimator and analyze the conditions under which the action embedding provides statistical benefits over conventional estimators. In addition to the theoretical analysis, we find that the empirical performance improvement can be substantial, enabling reliable OPE even when existing estimators collapse due to a large number of actions.

Citations (38)

Summary

  • The paper presents the MIPS estimator, which leverages action embeddings to reduce bias and variance in off-policy evaluation for expansive action spaces.
  • It reshapes importance weighting by marginalizing over the embedding space, maintaining unbiased estimates even with support deficiencies.
  • Empirical results on synthetic and real-world datasets highlight MIPS's robust performance improvements over traditional IPS-based methods.

Off-Policy Evaluation for Large Action Spaces via Embeddings

This paper presents a methodological advancement in off-policy evaluation (OPE) for contextual bandits with large action spaces. The central challenge it addresses is the severe bias and variance of existing OPE estimators, most of which rely on inverse propensity score (IPS) weighting, when the number of actions is large. The paper proposes the Marginalized IPS (MIPS) estimator, which leverages action embeddings to impose structure on the action space and thereby improve evaluation accuracy.

Core Contributions

The authors identify two critical limitations of conventional IPS-based estimators in large action spaces: excessive variance caused by the wide range the importance weights can take, and bias introduced by support deficiency, i.e., actions the logging policy never selects. They propose exploiting additional information in the form of action embeddings, which are assumed to mediate every possible effect of an action on the reward.
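To make this concrete, the conventional IPS estimator can be written in generic OPE notation (which may differ slightly from the paper's) as

```latex
\hat{V}_{\mathrm{IPS}}(\pi; \mathcal{D})
  = \frac{1}{n}\sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)}\, r_i ,
```

where $\pi_0$ is the logging policy, $\pi$ the target policy, and $(x_i, a_i, r_i)$ the logged context, action, and reward. The weight $\pi(a \mid x)/\pi_0(a \mid x)$ can grow on the order of the number of actions when the logging policy spreads probability over a large action set, and the estimator becomes biased whenever $\pi_0(a \mid x) = 0$ for some action with $\pi(a \mid x) > 0$.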

The MIPS estimator introduces marginalized importance weights, calculated over the action embedding space rather than the action space itself. This shift enables MIPS to maintain unbiasedness even when the action space lacks common support between logging and target policies, provided there is sufficient common support within the embedding space.
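Concretely, writing $e_i$ for the embedding observed alongside action $a_i$, the estimator described above can be sketched as (again in generic notation):

```latex
\hat{V}_{\mathrm{MIPS}}(\pi; \mathcal{D})
  = \frac{1}{n}\sum_{i=1}^{n} w(x_i, e_i)\, r_i ,
\qquad
w(x, e) = \frac{p(e \mid x, \pi)}{p(e \mid x, \pi_0)} ,
\qquad
p(e \mid x, \pi) = \sum_{a} p(e \mid x, a)\, \pi(a \mid x) .
```

The marginal weight $w(x, e)$ ranges over the (typically much smaller) embedding space rather than the action space, which is the source of both the variance reduction and the relaxed support requirement discussed next.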

Theoretical Insights

The paper rigorously analyzes the conditions under which MIPS can outperform traditional estimators. Key theoretical insights include:

  • Unbiased estimation: MIPS remains unbiased under the common embedding support and no-direct-effect assumptions, i.e., when the reward depends on the action only through its embedding.
  • Variance reduction: The paper demonstrates that MIPS consistently achieves lower variance compared to conventional IPS, particularly advantageous in settings with numerous possible actions.
  • Bias-variance trade-off: The research explores the impact of embedding dimension quality on the estimator's bias and variance, noting that strategic selection of embedding dimensions can optimize mean squared error (MSE) by intentionally introducing a controlled bias.

Empirical Evaluation

Empirical evaluations are conducted on both synthetic data and real-world data from an online fashion platform. The synthetic experiments show that MIPS substantially improves MSE over existing OPE estimators, with the gap widening as the number of actions grows. On the real-world data, MIPS likewise delivers robust performance improvements, supporting its applicability in practical settings.
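The flavor of the synthetic comparison can be reproduced with a minimal sketch. The setup below is illustrative only and is not the paper's experimental design: it assumes context-free policies, a deterministic action-to-embedding map (so the no-direct-effect assumption holds by construction), and Dirichlet-sampled logging and target policies, then compares vanilla IPS weights with marginalized weights over the embedding space.

```python
# Illustrative synthetic comparison of IPS vs. a MIPS-style estimator.
# All sizes, distributions, and the reward model are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 1000          # large action space
n_embed = 10              # coarse categorical action embedding
n_samples = 5000

# Deterministic action -> embedding map; rewards depend on the action only
# through its embedding (no direct effect).
action_to_embed = rng.integers(n_embed, size=n_actions)
embed_reward = rng.uniform(0.0, 1.0, size=n_embed)   # E[r | e]

# Context-free logging and target policies.
logging_probs = rng.dirichlet(np.ones(n_actions))
target_probs = rng.dirichlet(np.ones(n_actions))

# True value of the target policy.
true_value = float(np.sum(target_probs * embed_reward[action_to_embed]))

# Log data under the logging policy.
actions = rng.choice(n_actions, size=n_samples, p=logging_probs)
embeds = action_to_embed[actions]
rewards = rng.binomial(1, embed_reward[embeds]).astype(float)

# Vanilla IPS: importance weights over actions.
ips_weights = target_probs[actions] / logging_probs[actions]
ips_estimate = float(np.mean(ips_weights * rewards))

# MIPS-style: marginal weights over the embedding space,
# p(e | pi) = sum_a p(e | a) pi(a), deterministic embedding map here.
marg_target = np.bincount(action_to_embed, weights=target_probs, minlength=n_embed)
marg_logging = np.bincount(action_to_embed, weights=logging_probs, minlength=n_embed)
mips_weights = marg_target[embeds] / marg_logging[embeds]
mips_estimate = float(np.mean(mips_weights * rewards))

print(f"true value:    {true_value:.4f}")
print(f"IPS  estimate: {ips_estimate:.4f}  (max weight {ips_weights.max():.1f})")
print(f"MIPS estimate: {mips_estimate:.4f}  (max weight {mips_weights.max():.1f})")
```

With many actions collapsing onto a few embedding categories, the marginal weights stay small, and the MIPS-style estimate tends to land much closer to the true policy value than IPS on the same log size.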

Practical and Theoretical Implications

The shift to marginalized importance weights using embeddings introduces several implications:

  • Practical benefits: MIPS extends the applicability of OPE to environments with expansive action spaces, mitigating the variance and bias issues that have long constrained traditional approaches.
  • Theoretical implications: The use of action embeddings aligns with emerging approaches in AI that exploit structured representations to manage complexity and improve sample efficiency.

Future Directions

The paper sets the stage for further exploration into how action embeddings can be effectively optimized or learned to enhance the performance of MIPS. Areas for future research include refining the estimation of marginal importance weights and extending the framework to reinforcement learning settings where action spaces continue to grow.

In conclusion, this work provides significant insights into the use of action embeddings for off-policy evaluation, representing an important step in addressing the challenges posed by large action spaces. The proposed MIPS estimator not only enhances theoretical understanding but also offers tangible improvements for real-world applications in AI and machine learning.
