Estimation of Treatment Effects Under Nonstationarity via Truncated Difference-in-Q's (2506.05308v1)

Published 5 Jun 2025 in stat.ME

Abstract: Randomized controlled experiments ("A/B testing") are fundamental for assessing interventions in dynamic technology-driven environments, such as recommendation systems, online marketplaces, and digital health interventions. In these systems, interventions typically impact not only the current state of the system, but also future states; therefore, accurate estimation of the global average treatment effect (or GATE) from experiments requires accounting for the dynamic temporal behavior of the system. To address this, recent literature has analyzed estimators applied to Bernoulli randomized experiments in stationary environments, ranging from the standard difference-in-means (DM) estimator to methods building on reinforcement learning techniques, such as off-policy evaluation and the recently proposed difference-in-Q's (DQ) estimator. However, all these estimators exhibit high bias and variance when the environment is nonstationary. This paper addresses the challenge of estimation under nonstationarity. We show that a simple extension of the DM estimator using differences in truncated outcome trajectories yields favorable bias and variance in nonstationary Markovian settings. Our theoretical analysis establishes this result by first showing that the truncated estimator is in fact estimating an appropriate policy gradient that can be expressed as a difference in Q-values; thus we refer to our estimator as the truncated DQ estimator (by analogy to the DQ estimator). We then show that the corresponding policy gradient is a first-order approximation to the GATE. Combining these insights yields our bias and variance bounds. We validate our results through synthetic and realistic simulations, including hospital and ride-sharing settings, and show that a well-calibrated truncated DQ estimator achieves low bias and variance even in nonstationary environments.
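
The core construction described in the abstract, differencing truncated outcome trajectories under Bernoulli randomization, can be sketched in a few lines. The following is a minimal illustration of that idea, not the paper's exact estimator or its calibration: the function name truncated_dq, the window length k, and the AR(1)-style toy simulation are all assumptions made for this example.

```python
import numpy as np

def truncated_dq(rewards, treatments, k):
    """Truncated difference-in-means over forward outcome windows.

    For each time t with a full window, form the truncated return
    G_t = r_t + r_{t+1} + ... + r_{t+k-1}, then take the difference in
    mean G_t between treated and control time steps. (Illustrative
    sketch of the truncation idea, not the paper's exact definition.)
    """
    rewards = np.asarray(rewards, dtype=float)
    treatments = np.asarray(treatments, dtype=int)
    n = len(rewards) - k + 1  # last index t that still has a full window
    G = np.array([rewards[t:t + k].sum() for t in range(n)])
    A = treatments[:n]
    return G[A == 1].mean() - G[A == 0].mean()

# Toy Markovian example: an AR(1) state with treatment carryover, so the
# intervention affects future outcomes, not just the current one.
rng = np.random.default_rng(0)
T = 20_000
A = rng.binomial(1, 0.5, size=T)  # Bernoulli randomization per time step
rewards = np.empty(T)
state = 0.0
for t in range(T):
    state = 0.8 * state + 0.5 * A[t] + rng.normal(scale=0.1)
    rewards[t] = state

# With enough truncation, the estimate approaches the GATE of this toy
# system, 0.5 / (1 - 0.8) = 2.5.
print(truncated_dq(rewards, A, k=25))
```

In this linear toy system the estimator recovers the GATE almost exactly; the paper's contribution is characterizing the bias and variance of this kind of truncated estimator in general nonstationary Markovian settings, where the choice of truncation window trades bias against variance.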
