Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms
Published 30 May 2025 in cs.LG and stat.ML | (2505.24692v1)
Abstract: Canonical algorithms for multi-armed bandits typically assume a stationary reward environment where the size of the action space (number of arms) is small. More recently developed methods typically relax only one of these assumptions: existing non-stationary bandit policies are designed for a small number of arms, while Lipschitz, linear, and Gaussian process bandit policies are designed to handle a large (or infinite) number of arms in stationary reward environments under constraints on the reward function. In this manuscript, we propose a novel policy to learn reward environments over a continuous space using Gaussian interpolation. We show that our method efficiently learns continuous Lipschitz reward functions with $\mathcal{O}^*(\sqrt{T})$ cumulative regret. Furthermore, our method naturally extends to non-stationary problems with a simple modification. We finally demonstrate that our method is computationally favorable (100-10000x faster) and experimentally outperforms sliding Gaussian process policies on datasets with non-stationarity and an extremely large number of arms.
The paper introduces the Quick-Draw bandit policy that efficiently estimates Lipschitz reward functions over both spatial and temporal dimensions.
The proposed method leverages conditional Normal likelihoods to derive closed‐form estimates of mean and variance for effective UCB-based arm selection.
Experiments on simulated and real-world datasets show Quick-Draw outperforms baselines by achieving lower regret and higher performance metrics.
The paper "Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms" (2505.24692) addresses the challenging multi-armed bandit (MAB) problem in scenarios where the reward environment is non-stationary and the number of arms (action space) is extremely large, potentially continuous. Existing MAB algorithms typically handle either non-stationarity with a small number of arms or a large/continuous action space in stationary environments, but not both simultaneously. Canonical methods break down when the number of arms $K$ is comparable to or larger than the timescale of environmental change ($K \gtrsim T_w$), making exhaustive exploration within a stable period infeasible.
The authors propose the Quick-Draw bandit policy, a novel approach designed to efficiently learn Lipschitz reward functions over a continuous feature space that also change smoothly over time. The core idea is to model the expected payout function $\mu(x,t)$ probabilistically, specifically assuming a conditional Normal likelihood for an observation at point $x$ at time $t$ given a past observation $(x_s, y_s)$ at time $t_s$. The variance of this conditional likelihood, $\hat{\sigma}_s^2(x,t)$, is modeled as a function of the spatial distance $D(x, x_s)$ and temporal distance $(t - t_s)$:

$$\hat{\sigma}_s^2(x,t) \equiv \rho^2 + \left(\ell_x D(x, x_s)\right)^2 + \left(\ell_t (t - t_s)\right)^2$$

where $\rho^2$ represents irreducible noise, $\ell_x$ is a spatial bandwidth, and $\ell_t$ is a temporal bandwidth.
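As a minimal sketch of this variance model, the function below computes $\hat{\sigma}_s^2(x,t)$ for a single past observation, assuming Euclidean distance for $D$ and illustrative hyperparameter values ($\rho = 0.1$, $\ell_x = \ell_t = 1$); these defaults are not taken from the paper.

```python
import numpy as np

def cond_variance(x, t, x_s, t_s, rho=0.1, ell_x=1.0, ell_t=1.0):
    """Conditional variance of an observation at (x, t) given a past
    observation at (x_s, t_s): irreducible noise rho^2 plus squared
    spatial and temporal penalty terms. Euclidean distance stands in
    for the generic metric D(x, x_s)."""
    spatial = (ell_x * np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(x_s))) ** 2
    temporal = (ell_t * (t - t_s)) ** 2
    return rho ** 2 + spatial + temporal
```

Observations that are close in both space and time contribute variance near the noise floor $\rho^2$, while distant ones are heavily discounted.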
The policy combines information from all past observations $D_T = \{(x_s, y_s, t_s)\}_{s=1}^{T}$ by assuming the joint likelihood is a product of these conditional likelihoods. For Gaussian distributions, this product is also Gaussian, resulting in closed-form expressions for the estimated mean $\hat{\mu}_T(x,t)$ and variance $\hat{\Sigma}_T^2(x,t)$. These estimates are given by:

$$\hat{\Sigma}_T^2(x,t) = \left[\sum_{s=1}^{T} \frac{1}{\hat{\sigma}_s^2(x,t)}\right]^{-1}$$

$$\hat{\mu}_T(x,t) = \hat{\Sigma}_T^2(x,t) \left[\sum_{s=1}^{T} \frac{y_s}{\hat{\sigma}_s^2(x,t)}\right]$$
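These closed-form estimates are a precision-weighted average of past payouts. The sketch below implements them for one-dimensional arms, again assuming Euclidean distance and illustrative hyperparameters; it is an interpretation of the formulas, not the authors' code.

```python
import numpy as np

def posterior(x, t, xs, ys, ts, rho=0.1, ell_x=1.0, ell_t=1.0):
    """Precision-weighted mean and variance at query point (x, t),
    given arrays of past locations xs, payouts ys, and times ts."""
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    # Conditional variance of each past observation w.r.t. (x, t)
    var_s = rho**2 + (ell_x * np.abs(x - xs))**2 + (ell_t * (t - ts))**2
    Sigma2 = 1.0 / np.sum(1.0 / var_s)       # inverse total precision
    mu = Sigma2 * np.sum(ys / var_s)         # precision-weighted mean
    return mu, Sigma2
```

With a single nearby observation, the estimate collapses onto that payout with variance near $\rho^2$; adding redundant observations shrinks the variance further, as a product of Gaussians should.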
The Quick-Draw policy is an Upper Confidence Bound (UCB) type algorithm. At each round $t$, it calculates a UCB index for each arm $k$ (at point $x_k$) using the estimated mean and variance:

$$\mathrm{UCB}_{k,t} = \min\left(\hat{\mu}_T(x_k, t) + \gamma_{T+1}\,\hat{\Sigma}_T(x_k, t),\ 1\right)$$

where $\gamma_{T+1}$ is a scaling constant and the ceiling of 1 reflects the assumed bounded payout range $[0,1]$. The policy then selects the arm with the maximum UCB index.
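Putting the pieces together, a round of arm selection might look like the following sketch (one-dimensional arms, Euclidean distance, and an arbitrary choice of $\gamma$; the paper's actual schedule for $\gamma_{T+1}$ is not reproduced here).

```python
import numpy as np

def select_arm(arm_locs, t, xs, ys, ts, gamma=2.0,
               rho=0.1, ell_x=1.0, ell_t=1.0):
    """Return the index of the arm with the maximal capped UCB index."""
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    best_k, best_ucb = 0, -np.inf
    for k, xk in enumerate(arm_locs):
        # Conditional variances of all past observations w.r.t. arm k now
        var_s = rho**2 + (ell_x * np.abs(xk - xs))**2 + (ell_t * (t - ts))**2
        Sigma2 = 1.0 / np.sum(1.0 / var_s)
        mu = Sigma2 * np.sum(ys / var_s)
        # Optimistic index, capped at the assumed payout ceiling of 1
        ucb = min(mu + gamma * np.sqrt(Sigma2), 1.0)
        if ucb > best_ucb:
            best_k, best_ucb = k, ucb
    return best_k
```

Given a recent high payout observed at one arm's location and a low payout at another's, the policy favors the former once the confidence widths no longer dominate.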
From an implementation perspective, the Quick-Draw policy requires storing past observations (arm, payout, time). At each step, the UCB for a given arm is computed by iterating over all past observations to form the precision-weighted mean and variance, with weights $1/\hat{\sigma}_s^2$ determined by the spatial and temporal distance to the current arm and round. A brute-force implementation recomputes $\hat{\Sigma}_T^2$ and $\hat{\mu}_T$ for all $K$ arms at each round $T$, giving $\mathcal{O}(K \cdot T)$ complexity per round when all past observations are processed naively. However, if past uncertainty contributions $1/\hat{\sigma}_s^2(x_k, t)$ are cached for each arm relative to previous observations, each update might approach $\mathcal{O}(K)$, since only the new observation's contribution needs to be added. A more efficient implementation could therefore maintain the sums $\sum_s 1/\hat{\sigma}_s^2$ and $\sum_s y_s/\hat{\sigma}_s^2$ for each arm iteratively.
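One practical caching strategy, sketched below under the same assumptions as before (1-D arms, Euclidean distance, illustrative hyperparameters), exploits the fact that the spatial penalty $(\ell_x D(x_k, x_s))^2$ is time-invariant: it is computed once per new observation in $\mathcal{O}(K)$, and only the temporal term is refreshed each round. This is one way to realize the caching idea described above, not necessarily the authors' implementation.

```python
import numpy as np

class QuickDrawCache:
    """Cache the time-invariant spatial penalties (ell_x * D(x_k, x_s))^2
    per arm for every stored observation; only the temporal penalty
    (ell_t * (t - t_s))^2 is recomputed at query time."""

    def __init__(self, arm_locs, rho=0.1, ell_x=1.0, ell_t=1.0):
        self.arm_locs = np.asarray(arm_locs, dtype=float)
        self.rho2, self.ell_x, self.ell_t = rho**2, ell_x, ell_t
        self.spatial = []          # one length-K array per observation
        self.ys, self.ts = [], []

    def add(self, x_s, y_s, t_s):
        # O(K) work per new observation: distances to every arm, once
        self.spatial.append((self.ell_x * np.abs(self.arm_locs - x_s)) ** 2)
        self.ys.append(y_s)
        self.ts.append(t_s)

    def estimates(self, t):
        # Vectorized over all arms and observations; no distance
        # computation in the inner loop, only cheap additions
        S = np.stack(self.spatial)                        # shape (T, K)
        temporal = (self.ell_t * (t - np.asarray(self.ts))) ** 2
        var = self.rho2 + S + temporal[:, None]           # shape (T, K)
        Sigma2 = 1.0 / np.sum(1.0 / var, axis=0)
        mu = Sigma2 * np.sum(np.asarray(self.ys)[:, None] / var, axis=0)
        return mu, Sigma2
```

Because the temporal term depends on the current round $t$, the full precision sums cannot be frozen across rounds; caching the spatial distances is the part that is exactly reusable.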
Compared to Gaussian Process (GP) bandits, the Quick-Draw policy can be viewed as a form of kernel interpolation similar to Nadaraya-Watson estimation, whereas GP bandits rely on inverting a kernel matrix. This difference gives Quick-Draw a significant computational advantage: exact GP bandits incur $\mathcal{O}(T^3)$ complexity per round (to update the model on $T$ observations), while Quick-Draw scales linearly or near-linearly in the number of past observations $T$ when computing the mean and variance. Experiments show Quick-Draw being orders of magnitude faster than GP-UCB.
The paper provides a theoretical regret bound for the stationary case (only spatial Lipschitzness). Under its assumptions, the cumulative regret is shown to be $\mathcal{O}(\sqrt{T}\,\ln^2 T)$, consistent with the $\mathcal{O}^*(\sqrt{T})$ rate stated in the abstract. This is comparable to the regret bounds achieved by GP bandit policies in similar stationary settings, but Quick-Draw's bound applies to Lipschitz reward functions.
The effectiveness of the Quick-Draw policy is demonstrated through extensive experiments on both simulated data and a real-world dataset.
Simulated Experiments: Using Gaussian random fields with controllable spatial and temporal correlations, noise levels, and reward function sharpness, Quick-Draw is compared against Sliding-Window GP-UCB (SW-GP-UCB), sliding $\epsilon$-greedy, restless bandit, and random sampling. Quick-Draw consistently outperforms the baselines across various challenging non-stationary settings, showing better adaptation to changes and more efficient exploration guided by spatial and temporal dependencies. It is particularly effective when spatial and temporal correlations are significant. The hyperparameters $\ell_x$ and $\ell_t$ exhibit robustness; performance is relatively insensitive to their exact values within a reasonable range (e.g., around 1 when distances are normalized to $[0, 1]$).
Open Bandit Dataset Evaluation: The policy is evaluated on a large-scale public dataset from an online advertising platform, aiming to maximize click-through rate (CTR) for different products (arms) presented to users (contexts). The problem involves 46 arms and exhibits non-stationarity over time. Using Inverse Propensity Scoring (IPS) for off-policy evaluation, Quick-Draw achieves an estimated CTR of 3.51%, significantly higher than random (0.49%), SW-GP-UCB (0.57%), restless bandit (0.98%), and sliding ϵ-greedy (2.12%). This demonstrates the policy's practical applicability and superior performance in real-world, non-stationary environments with a moderately large number of arms.
In summary, the Quick-Draw bandit policy offers a practical and computationally efficient solution for multi-armed bandit problems in challenging real-world scenarios characterized by both non-stationarity and a large action space. By explicitly modeling the decay of information over spatial and temporal distance, it effectively balances exploration and exploitation, achieving lower regret and higher performance compared to existing methods while being significantly faster than methods like GP bandits. The policy's robustness to hyperparameter tuning further simplifies its implementation and deployment.