Two-State Bernoulli Bandits
- Two-state Bernoulli bandits are a decision-theoretic model with two arms that generate binary rewards with unknown success probabilities, exemplifying the exploration–exploitation tradeoff.
- The framework employs Bayesian inference, Beta priors, and dynamic programming to update beliefs and optimize cumulative rewards over finite or infinite horizons.
- Analyses cover regret bounds across different gap regimes and explore extensions like streaming and dynamic bandits to enhance both theoretical insights and practical applications.
A two-state Bernoulli bandit is a canonical stochastic decision problem consisting of two arms (actions), each generating i.i.d. Bernoulli rewards with unknown parameters. At each round, a decision-maker selects one arm to pull, observes a binary reward, and aims to optimize a cumulative objective—such as total expected reward or minimal regret—over a finite or infinite horizon. This setting is used extensively to formalize and study fundamental exploration–exploitation tradeoffs in sequential learning, with direct relevance to statistics, decision theory, reinforcement learning, and information theory.
1. Mathematical Formulation and Bayesian Principles
A two-state Bernoulli bandit specifies two arms $k \in \{1, 2\}$, with unknown success probabilities $\theta_1, \theta_2 \in [0, 1]$. Pulling arm $k$ at stage $t$ yields a reward $X_t$ distributed as $\mathrm{Bernoulli}(\theta_k)$. The Bayesian approach imposes an independent conjugate prior $\theta_k \sim \mathrm{Beta}(a_k, b_k)$ on each arm; the system's state can be represented as $(a_1, b_1, a_2, b_2)$.
The objective over a horizon $T$ (possibly infinite) with discount factor $\gamma \in (0, 1)$ is to maximize
$$\mathbb{E}_\pi\Big[\sum_{t=1}^{T} \gamma^{t-1} X_t\Big],$$
where $\pi$ is a sequential allocation policy based on observed data. The posterior update for arm $k$ after pulling it and observing $X_t = x \in \{0, 1\}$ is $(a_k, b_k) \mapsto (a_k + x,\ b_k + 1 - x)$, while the other arm's parameters remain unchanged.
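In code, the conjugate update is a one-line bookkeeping step. The sketch below is illustrative (the state is a plain tuple (a1, b1, a2, b2), and the function name is an arbitrary choice), assuming rewards in {0, 1}:

```python
def update(state, arm, reward):
    """Conjugate Beta-Bernoulli update: a success increments a_k, a failure increments b_k."""
    a1, b1, a2, b2 = state
    if arm == 1:
        return (a1 + reward, b1 + 1 - reward, a2, b2)
    return (a1, b1, a2 + reward, b2 + 1 - reward)
```

Because the Beta family is conjugate to the Bernoulli likelihood, this tuple is a sufficient statistic for the full observation history.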
The value function $V(a_1, b_1, a_2, b_2)$ describes the maximal expected discounted payoff from a given state, recursively: $V = \max(Q_1, Q_2)$, with action-value
$$Q_1(a_1, b_1, a_2, b_2) = \frac{a_1}{a_1 + b_1}\big[1 + \gamma\, V(a_1 + 1, b_1, a_2, b_2)\big] + \frac{b_1}{a_1 + b_1}\, \gamma\, V(a_1, b_1 + 1, a_2, b_2)$$
and a similar formula for $Q_2$ by symmetry (Yu, 2011; Jacko, 2019).
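This recursion can be sketched directly with memoization. The code below is an illustrative approximation, not the exact infinite-horizon solution: it truncates the discounted recursion at a fixed depth, so values carry an error of at most gamma**DEPTH / (1 - gamma); the names, the discount 0.9, and the depth 12 are arbitrary choices for the sketch.

```python
from functools import lru_cache

GAMMA = 0.9
DEPTH = 12  # truncation depth; the ignored tail is worth at most GAMMA**DEPTH / (1 - GAMMA)

@lru_cache(maxsize=None)
def V(a1, b1, a2, b2, d=DEPTH):
    """Approximate discounted value: max over the two action-values."""
    if d == 0:
        return 0.0
    return max(Q(a1, b1, a2, b2, d, arm=1), Q(a1, b1, a2, b2, d, arm=2))

def Q(a1, b1, a2, b2, d, arm):
    """Action-value: posterior-mean immediate reward plus discounted continuation."""
    a, b = (a1, b1) if arm == 1 else (a2, b2)
    p = a / (a + b)  # posterior probability that this pull succeeds
    if arm == 1:
        win, lose = V(a1 + 1, b1, a2, b2, d - 1), V(a1, b1 + 1, a2, b2, d - 1)
    else:
        win, lose = V(a1, b1, a2 + 1, b2, d - 1), V(a1, b1, a2, b2 + 1, d - 1)
    return p * (1 + GAMMA * win) + (1 - p) * GAMMA * lose
```

Note how each branch of the recursion feeds the corresponding posterior update back into the value function; the other arm's parameters pass through unchanged, exactly as in the formula above.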
2. Structure of Optimal Policies and Index Rules
The infinite-horizon discounted problem admits a remarkable structure: the optimal policy is an index rule based on the Gittins index $G(a, b)$ of each arm. This index is the unique value at which continuing to play the arm optimally and retiring for a lump-sum reward $G(a, b) / (1 - \gamma)$ are equally attractive:
$$\frac{G(a, b)}{1 - \gamma} = \sup_{\tau > 0}\, \mathbb{E}\Big[\sum_{t=0}^{\tau - 1} \gamma^t X_t + \gamma^\tau\, \frac{G(a, b)}{1 - \gamma}\Big].$$
Each epoch, the arm with the higher index is pulled. This result is a consequence of two monotonicity theorems:
- Monotonicity in prior mean: At fixed prior weight $n = a + b$, a higher prior mean $a / (a + b)$ makes the arm more attractive; $G(a, b)$ increases with the mean.
- Monotonicity in prior weight: At fixed mean, greater prior weight (i.e., more data or less uncertainty) makes the arm less attractive; $G(a, b)$ decreases with $n$ (Yu, 2011).
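Both monotonicity properties can be checked numerically. The sketch below is illustrative (not taken from the cited papers): it approximates the Gittins index of a single Beta(a0, b0) arm by bisection on the retirement reward, using a depth-truncated DP for the optimal-stopping subproblem.

```python
from functools import lru_cache

def gittins_index(a0, b0, gamma=0.9, depth=30, tol=1e-5):
    """Approximate Gittins index of a Beta(a0, b0) Bernoulli arm: bisect on the
    retirement rate lam at which playing on and retiring for lam / (1 - gamma)
    are equally attractive."""

    def continuation_value(lam):
        retire = lam / (1 - gamma)

        @lru_cache(maxsize=None)
        def V(a, b, d):
            if d == 0:
                return retire  # tail approximation: retire at the truncation horizon
            p = a / (a + b)
            cont = p * (1 + gamma * V(a + 1, b, d - 1)) + (1 - p) * gamma * V(a, b + 1, d - 1)
            return max(retire, cont)

        # Value of pulling once more from (a0, b0), then acting optimally.
        p = a0 / (a0 + b0)
        return p * (1 + gamma * V(a0 + 1, b0, depth)) + (1 - p) * gamma * V(a0, b0, depth)

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if continuation_value(lam) > lam / (1 - gamma):
            lo = lam  # still worth playing: the true index exceeds lam
        else:
            hi = lam
    return (lo + hi) / 2
```

With gamma = 0.9 this should reproduce the qualitative ordering above: the index of Beta(1, 1) exceeds its mean 0.5 (an exploration bonus), while Beta(2, 2), with the same mean but more weight, has a smaller index.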
In the finite-horizon setting, exact dynamic programming (DP) yields the Bayes-optimal policy via backward induction on the joint state $(a_1, b_1, a_2, b_2)$. The computational cost is polynomial in the horizon $T$ (the reachable joint states after $t$ pulls number $O(t^3)$), and for two arms the problem is solvable in seconds on modern hardware even for horizons in the thousands (Jacko, 2019).
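A minimal backward-induction sketch follows; it is illustrative (a memoized recursion rather than an explicit table, so it suits small horizons; names and the uniform Beta(1, 1) starting priors are arbitrary choices):

```python
from functools import lru_cache

def bayes_optimal_value(horizon):
    """Exact Bayes-optimal expected total (undiscounted) reward over `horizon`
    pulls, starting from independent uniform Beta(1, 1) priors on both arms."""

    @lru_cache(maxsize=None)
    def V(a1, b1, a2, b2, t):
        if t == 0:
            return 0.0
        best = 0.0
        for arm in (1, 2):
            a, b = (a1, b1) if arm == 1 else (a2, b2)
            p = a / (a + b)
            if arm == 1:
                q = p * (1 + V(a1 + 1, b1, a2, b2, t - 1)) + (1 - p) * V(a1, b1 + 1, a2, b2, t - 1)
            else:
                q = p * (1 + V(a1, b1, a2 + 1, b2, t - 1)) + (1 - p) * V(a1, b1, a2, b2 + 1, t - 1)
            best = max(best, q)
        return best

    return V(1, 1, 1, 1, horizon)
```

As a sanity check: with one pull the best achievable is the prior mean 0.5; with two pulls, pulling either arm and then acting on the updated posterior gives 13/12.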
3. Regret Analysis and Asymptotics
In the symmetric Bernoulli bandit (arm means $\tfrac{1}{2} \pm \tfrac{\Delta}{2}$), minimax regret analysis has been connected to the solution of a linear heat equation. The regret and pseudoregret over horizon $T$ obey sharp asymptotics determined by the gap $\Delta = \theta_1 - \theta_2$:
- Small-gap regime ($\Delta \lesssim T^{-1/2}$): pseudoregret of order $\Delta T$, i.e., at most order $\sqrt{T}$.
- Medium-gap regime ($T^{-1/2} \ll \Delta \ll 1$): logarithmic-in-$T$ regret, with an explicit constant depending on $\Delta$ (Kobzar et al., 2022).
- Large-gap regime ($\Delta$ of constant order): logarithmic growth until the regret saturates at a constant determined by the fixed gap $\Delta$.
Non-asymptotic upper and lower bounds follow from viewing the DP value recursion as a finite-difference approximation to the heat equation; the discretization error can be bounded explicitly and vanishes as $T \to \infty$. This approach yields explicit leading-order regret rates across all regimes (Kobzar et al., 2022).
4. Algorithmic Approaches and Benchmarks
Bayes-optimal DP remains the gold standard for moderate horizons, though various heuristic and index-based algorithms are widely analyzed:
- Gittins Index (infinite discounted horizon): Optimal, as previously discussed.
- Whittle Index (finite horizon): Used as an approximation; requires horizon-dependent truncation.
- Thompson Sampling: At each time, sample $\tilde{\theta}_k$ from each arm's Beta posterior and play $\arg\max_k \tilde{\theta}_k$. Empirically strong, but lacking matching regret guarantees in finite horizons.
- Optimistic UCB-style algorithms: Compute $U_k(t) = \hat{\theta}_k(t) + \sqrt{\tfrac{2 \log t}{n_k(t)}}$, where $n_k(t)$ counts the pulls of arm $k$, and play $\arg\max_k U_k(t)$. Classical UCB1 is substantially suboptimal here, even with tuning.
- Hybrid heuristics: e.g., BLFF+BM and BLFF+0.18-UCB come within a small margin of Bayes-optimal DP performance (Jacko, 2019).
- OFUGLB (Optimistic Frequentist Upper-bound for Generalized Linear Bandits): Constructs a likelihood-ratio confidence sequence for each arm and pulls the arm with the highest upper confidence bound on its success probability. With high probability, its regret for two-state Bernoulli bandits matches the optimal UCB rates up to lower-order terms, while avoiding polynomial dependence on $S$ (the norm constraint on the logistic parameter) (Lee et al., 2024).
For moderate horizons $T$, exact DP achieves near-constant regret; UCB with the standard exploration constant can be markedly worse than DP, and even tuned UCB still incurs higher regret (Jacko, 2019).
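As a concrete illustration of Thompson sampling from the list above, the following self-contained simulation (illustrative names and parameters) plays two Bernoulli arms starting from Beta(1, 1) priors:

```python
import random

def thompson_run(theta, horizon, seed=0):
    """Simulate Thompson sampling on two Bernoulli arms with true means `theta`.
    Returns the number of pulls of each arm."""
    rng = random.Random(seed)
    a = [1, 1]  # Beta(1, 1) prior pseudo-counts
    b = [1, 1]
    pulls = [0, 0]
    for _ in range(horizon):
        # Sample one mean from each posterior and play the argmax.
        samples = [rng.betavariate(a[k], b[k]) for k in (0, 1)]
        k = 0 if samples[0] >= samples[1] else 1
        reward = 1 if rng.random() < theta[k] else 0
        a[k] += reward
        b[k] += 1 - reward
        pulls[k] += 1
    return pulls
```

With a large gap, e.g. theta = (0.8, 0.3) over a few thousand rounds, the better arm should receive the overwhelming majority of pulls as its posterior concentrates.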
5. Exploration–Exploitation Dilemma and Information-Theoretic Policies
The Bayesian formalism yields a rigorous understanding of exploration–exploitation. Monotonicity theorems imply that higher prior mean is inherently more attractive (exploitation), but at equal mean, lower prior weight (greater uncertainty) confers more value (exploration incentive) (Yu, 2011). This quantifies the exploration bonus analytically.
Information-directed sampling (IDS) policies formalize an explicit trade-off between one-step regret and information gain (reduction in posterior entropy). For the symmetric two-state Bernoulli bandit, the IDS policy coincides with the myopic posterior-mean-maximizing rule and achieves bounded cumulative regret. In more challenging settings (e.g., one fair coin and one biased coin), IDS achieves logarithmic regret, matching the Lai–Robbins lower bound (Hirling et al., 23 Dec 2025). The IDS framework introduces a tuning parameter $\lambda$ to interpolate between exploitation and exploration, playing $\arg\min_a \Delta_t(a)^\lambda / I_t(a)$, where $\Delta_t(a)$ is the expected one-step regret of action $a$ and $I_t(a)$ its expected information gain.
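The two IDS ingredients can be estimated by Monte Carlo in the Beta–Bernoulli case. The sketch below is an illustrative approximation (not the algorithm of the cited primer): the expected information gain of pulling arm k is taken as the mutual information between the next reward and theta_k, estimated as the posterior-averaged KL divergence from Bern(theta_k) to the predictive mean.

```python
import math
import random

def ids_scores(a, b, n_samples=20000, seed=0):
    """Monte-Carlo estimates of one-step expected regret Delta_k and expected
    information gain I_k for two arms with Beta(a[k], b[k]) posteriors;
    IDS (with lambda = 2) plays argmin Delta_k**2 / I_k."""
    rng = random.Random(seed)
    thetas = [[rng.betavariate(a[k], b[k]) for _ in range(n_samples)] for k in (0, 1)]
    e_best = sum(max(t1, t2) for t1, t2 in zip(*thetas)) / n_samples
    mu = [a[k] / (a[k] + b[k]) for k in (0, 1)]
    delta = [e_best - mu[k] for k in (0, 1)]  # expected one-step regret of each arm

    def kl_bern(p, q):
        p = min(max(p, 1e-12), 1 - 1e-12)
        q = min(max(q, 1e-12), 1 - 1e-12)
        return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

    # I_k = E_theta[ KL(Bern(theta_k) || Bern(mu_k)) ]: mutual information between
    # the next reward from arm k and its unknown parameter.
    info = [sum(kl_bern(t, mu[k]) for t in thetas[k]) / n_samples for k in (0, 1)]
    return delta, info
```

An uncertain arm and a well-estimated arm with the same posterior mean have the same myopic appeal, but the uncertain arm carries a much larger information gain, which is exactly the exploration incentive quantified above.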
6. Generalizations and Variants
Several extensions modify the canonical model:
- Streaming (Online) Bernoulli Bandit: Each bandit (arm) is encountered exactly once in a stream and, if skipped, cannot be revisited. Threshold-based "skip or stay" policies emerge as nearly optimal, with per-pull expected loss decaying polynomially in the pool size, unlike the regret scaling of revisitable MABs. The classical trade-off disappears: exploration is conducted via "skipping" rather than repeated sampling (Roy et al., 2017).
- Dynamic Bernoulli Bandits: Each arm's reward distribution evolves as a two-state Markov chain between high and low success probabilities. Adaptive Forgetting Factor (AFF) algorithms (AFF-$\varepsilon$-Greedy, AFF-UCB, AFF-TS) discount old observations using a learnable parameter, improving performance over classic algorithms in environments with changing means. Empirically, AFF-based Thompson sampling achieves the best simulated regret under both slow and fast switching (Lu et al., 2017).
- Frequentist vs Bayesian Optimality: Bayes-optimal DP is only optimal with respect to the chosen prior; it is not minimax-optimal for fixed parameters, and heuristic rules may outperform DP for some configurations (Jacko, 2019). The Gittins policy does not ensure complete learning in all settings; finite-horizon limits circumvent this issue.
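For the dynamic variant, a simplified fixed-forgetting-factor Thompson sampler conveys the idea. This is an illustrative sketch, not the AFF algorithms of Lu et al. (which learn the factor online); `lam` is fixed here, and `switch_env` is a hypothetical piecewise-constant environment.

```python
import random

def switch_env(t, k, rng):
    """Hypothetical environment: the arms' means swap halfway through."""
    means = [0.8, 0.2] if t < 500 else [0.2, 0.8]
    return 1 if rng.random() < means[k] else 0

def forgetting_ts(rewards_fn, horizon, lam=0.95, seed=0):
    """Thompson sampling with a fixed forgetting factor: old pseudo-counts are
    discounted by lam before each update, so the effective sample size stays
    bounded by 1 / (1 - lam) and the posterior can track a drifting arm."""
    rng = random.Random(seed)
    a = [1.0, 1.0]
    b = [1.0, 1.0]
    total = 0
    for t in range(horizon):
        k = max((0, 1), key=lambda i: rng.betavariate(a[i], b[i]))
        r = rewards_fn(t, k, rng)
        # Discount, then update: keeps a[k] + b[k] near 1 / (1 - lam).
        a[k] = lam * a[k] + r
        b[k] = lam * b[k] + (1 - r)
        total += r
    return total, a, b
```

Bounding the effective sample size is what restores the exploration incentive after a switch: a posterior that never stops concentrating would keep playing the formerly better arm.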
7. Empirical, Computational, and Practical Considerations
Modern implementations can solve the two-state Bernoulli bandit optimally via DP for horizons up to thousands in practical time and memory (e.g., BinaryBandit package in Julia) (Jacko, 2019). Empirical benchmarks confirm that many heuristics under-explore or suffer 2–10× increased regret compared to DP. Efficient index computation (Gittins, Whittle) reduces dimensionality from the full joint state to per-arm subproblems, but nontrivial dynamic programming is still required; closed-form indices are unavailable (Yu, 2011).
Classic myths—such as DP intractability, universal optimality of UCB, or inevitable logarithmic regret growth—are explicitly addressed and refuted in the recent literature. Optimal, near-optimal, and robust algorithmic options now exist across stochastic, adversarial, and dynamic two-state Bernoulli bandit scenarios (Jacko, 2019, Hirling et al., 23 Dec 2025, Lee et al., 2024).
Key References:
- "Structural Properties of Bayesian Bandits with Exponential Family Distributions" (Yu, 2011)
- "The Finite-Horizon Two-Armed Bandit Problem with Binary Responses" (Jacko, 2019)
- "A PDE-Based Analysis of the Symmetric Two-Armed Bernoulli Bandit" (Kobzar et al., 2022)
- "Online Multi-Armed Bandit" (Roy et al., 2017)
- "On Adaptive Estimation for Dynamic Bernoulli Bandits" (Lu et al., 2017)
- "Information-directed sampling for bandits: a primer" (Hirling et al., 23 Dec 2025)
- "A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits" (Lee et al., 2024)