MultiBanAbs: Online FDR Control for A/B/n Testing
- MultiBanAbs is a doubly-sequential online framework that integrates multi-armed bandit (MAB) testing with always-valid p-values and online FDR control.
- It employs an adaptive best-arm identification method using a LUCB algorithm variant to optimize sample efficiency while rigorously controlling error rates.
- The approach enables interleaved A/B/n experiments with near-optimal discovery power and a 50%–70% reduction in sample complexity compared to traditional fixed-sample methods.
MultiBanAbs refers to a doubly-sequential online framework for running a sequence of bandit-based “A/B/n” tests, each with optimal sample efficiency and anytime false discovery rate (FDR) control. Unlike classical workflows where each A/B test is separately fixed-sample or sequential, MultiBanAbs encapsulates a protocol in which each experiment is an adaptive best-arm identification instance, and the decision to declare a discovery is governed by an online FDR algorithm. This approach enables massive-scale, interleaved (possibly overlapping) multiple hypothesis testing while ensuring statistical rigor and near-optimal efficiency (Yang et al., 2017).
1. Formal Problem Definition
MultiBanAbs addresses scenarios where a stream of experiments $j = 1, 2, \ldots$ is tested sequentially, each corresponding to a multi-armed bandit (MAB) with one control arm (mean $\mu_0$) and $K$ alternatives (means $\mu_1, \ldots, \mu_K$). For each experiment $j$, the hypotheses are

$$H_0^{(j)}: \mu_0 \geq \max_{k \in \{1, \ldots, K\}} \mu_k \qquad \text{vs.} \qquad H_1^{(j)}: \mu_0 < \max_{k \in \{1, \ldots, K\}} \mu_k,$$

where a margin $\epsilon > 0$ determines the minimum improvement over the control required for declaring a successful alternative.
The core requirement is to test many such MAB instances, adaptively and in parallel, while controlling the online false discovery rate at level $\alpha$, globally across all tests at all times (i.e., under arbitrary data-dependent stopping).
2. Always-Valid Sequential p-Values for Bandit Instances
A central technical innovation in MultiBanAbs is the construction of always-valid p-values for each best-arm MAB instance, ensuring validity under optional stopping: at any stopping time $\tau$ (potentially dependent on prior data and test outcomes),

$$\Pr_{H_0}\left( P_\tau \leq \alpha \right) \leq \alpha \quad \text{for all } \alpha \in [0, 1].$$

The p-value process is built using non-asymptotic law-of-the-iterated-logarithm (LIL) confidence bands, with upper and lower confidence bounds per arm,

$$U_k(t, \delta) = \hat{\mu}_k(t) + \phi_\delta(n_k(t)), \qquad L_k(t, \delta) = \hat{\mu}_k(t) - \phi_\delta(n_k(t)),$$

where $\hat{\mu}_k(t)$ is the empirical mean of arm $k$ after $n_k(t)$ pulls and $\phi_\delta(n)$ is an LIL-type deviation term of order $\sqrt{\log(\log(n)/\delta)/n}$. The per-instance p-value at time $t$ is then the smallest confidence level at which some alternative arm's lower bound clears the control's upper bound,

$$p_t = \inf\left\{ \delta \in (0, 1] : L_k(t, \delta) > U_0(t, \delta) \text{ for some } k \geq 1 \right\},$$

and the overall p-value for the experiment is the running minimum across arms and timepoints, $P_t = \min_{s \leq t} p_s$.
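This construction can be sketched in code. The following is a minimal illustration under assumptions: `lil_radius` uses an arbitrary constant rather than the paper's exact bound, and the hypothetical helper `instantaneous_p` finds the smallest confidence level separating an alternative from the control by geometric bisection over δ.

```python
import math

def lil_radius(t: int, delta: float, c: float = 1.1) -> float:
    """Anytime confidence radius for a 1-sub-Gaussian mean after t pulls,
    in the law-of-the-iterated-logarithm style (illustrative constants)."""
    t = max(t, 2)
    loglog = math.log(max(math.log(t), 1.0))
    return math.sqrt(c * (2.0 * loglog + math.log(1.0 / delta)) / t)

def instantaneous_p(mu0, n0, arm_means, arm_counts, eps=0.0):
    """Smallest delta at which some alternative arm's lower confidence bound
    clears the control's upper bound by eps; returns 1.0 if none separates."""
    best = 1.0
    for mu_k, n_k in zip(arm_means, arm_counts):
        # no separation even with the narrowest (delta = 1) bands: skip arm
        if (mu_k - lil_radius(n_k, 1.0)) - (mu0 + lil_radius(n0, 1.0)) <= eps:
            continue
        lo, hi = 1e-12, 1.0
        for _ in range(80):  # geometric bisection for the smallest such delta
            mid = math.sqrt(lo * hi)
            sep = (mu_k - lil_radius(n_k, mid)) - (mu0 + lil_radius(n0, mid))
            if sep > eps:
                hi = mid
            else:
                lo = mid
        best = min(best, hi)
    return best
```

The always-valid p-value of an experiment would then be the running minimum of this quantity over time, updated after every pull.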
3. Best-Arm Identification with Controlled Error
Within each MAB instance, MultiBanAbs runs a variant of the LUCB algorithm, parameterized by the required error tolerance ($\epsilon$) and the confidence parameter ($\delta$) set by the associated FDR procedure:
- Each arm is initially pulled once.
- At each round, the empirically best arm and its strongest challenger (the arm with the largest upper confidence bound among the rest) are pulled.
- Sampling stops when either no variant can demonstrably beat the control, or some treatment arm is highly likely to be superior.
- On stopping, an always-valid p-value is returned, given by the running minimum of the evidence against the null over all arms and timepoints.
Sample complexity is near-optimal: the bandit subroutine halts after

$$O\left( \sum_{k=1}^{K} \Delta_k^{-2} \log\left( \frac{\log(1/\Delta_k^2)}{\delta} \right) \right)$$

total pulls, where $\Delta_k$ encodes the entailed effect size of arm $k$ after accounting for the margin $\epsilon$ (Yang et al., 2017).
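A generic LUCB-style loop matching these steps might look as follows. The confidence radius, the `pull` interface, and the stopping margin are illustrative assumptions rather than the paper's exact specification.

```python
import math

def radius(t: int, delta: float) -> float:
    # illustrative anytime confidence radius, not the paper's exact constants
    t = max(t, 2)
    return math.sqrt((2.0 * math.log(max(math.log(t), 1.0))
                      + math.log(1.0 / delta)) / t)

def lucb_best_arm(pull, k: int, delta: float, eps: float = 0.0,
                  max_pulls: int = 100_000):
    """LUCB-style best-arm identification: sample the empirical leader and its
    strongest challenger until their confidence bands separate by eps."""
    sums = [pull(i) for i in range(k)]   # pull each arm once
    counts = [1] * k
    total = k
    while total < max_pulls:
        means = [s / n for s, n in zip(sums, counts)]
        leader = max(range(k), key=lambda i: means[i])
        challenger = max((i for i in range(k) if i != leader),
                         key=lambda i: means[i] + radius(counts[i], delta))
        # stop once the leader's lower bound clears the challenger's upper bound
        if (means[leader] - radius(counts[leader], delta)
                >= means[challenger] + radius(counts[challenger], delta) - eps):
            return leader
        for i in (leader, challenger):
            sums[i] += pull(i)
            counts[i] += 1
            total += 1
    return max(range(k), key=lambda i: sums[i] / counts[i])
```

In the full pipeline, the identified arm would still be vetted against the control through the always-valid p-value before any discovery is declared.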
4. Online FDR Control Framework Integration
MultiBanAbs pipelines the bandit-level p-values into an online FDR control algorithm, such as LORD, SAFFRON, or generalized $\alpha$-investing protocols. These algorithms sequentially assign a test-specific significance level $\alpha_j$ to each experiment while properly accounting for prior rejections, iterating the following protocol:
- Obtain the test level $\alpha_j$ from the online FDR rule and the past rejection history.
- Run the MAB best-arm algorithm at confidence $\delta_j$ (determined by $\alpha_j$); on stopping, it produces the p-value $P^{(j)}$.
- If $P^{(j)} \leq \alpha_j$, declare a discovery; otherwise accept the null.
- Update the FDR wealth and record the rejection status for use in subsequent tests.
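The level-allocation step can be sketched with a LORD++-style rule, one member of the generalized $\alpha$-investing family named above. The $\gamma$ sequence and wealth split below are illustrative assumptions, and the fixed p-value list stands in for the bandit subroutine's outputs.

```python
import math

def gamma(t: int) -> float:
    """Any nonnegative sequence summing to one over t = 1, 2, ... works;
    6/(pi^2 t^2) is a simple illustrative choice."""
    return 6.0 / (math.pi ** 2 * t * t)

def lord_decisions(pvalues, alpha=0.05, w0=0.025):
    """LORD++-style online FDR rule (sketch): each test's level is funded by
    the initial wealth w0 plus a payout from every earlier rejection."""
    rejection_times = []          # 1-indexed times of past discoveries
    decisions = []
    for t, p in enumerate(pvalues, start=1):
        level = w0 * gamma(t)
        for j, tau in enumerate(rejection_times):
            payout = (alpha - w0) if j == 0 else alpha
            level += payout * gamma(t - tau)
        reject = p <= level
        decisions.append(reject)
        if reject:
            rejection_times.append(t)
    return decisions
```

Note how an early rejection raises the levels granted to later tests, which is what lets the procedure recover power while keeping the FDR budget intact.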
This architecture ensures

$$\mathrm{FDR}(T) \leq \alpha \quad \text{for all } T \geq 1,$$

uniformly over all horizons and arbitrary (possibly data-dependent) stopping (Yang et al., 2017).
5. Statistical Guarantees and Sample Efficiency
The core theoretical results established are:
- Anytime mFDR and FDR control: The protocol is guaranteed to maintain $\mathrm{mFDR}(T) \leq \alpha$ and $\mathrm{FDR}(T) \leq \alpha$ at every time $T$, regardless of adaptive sampling or stopping.
- Sample-optimal discovery rate: The best-arm MAB subroutine, run at confidence $\delta_j$, terminates in $O\big(\sum_k \Delta_k^{-2} \log(\log(1/\Delta_k^2)/\delta_j)\big)$ pulls, matching classical MAB efficiency up to log factors.
- High power: The best-arm discovery rate (fraction of true alternatives declared discoveries) remains bounded away from zero; power is competitive with non-MAB fixed-sample alternatives.
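For reference, the two metrics in the first guarantee are, in one standard convention (with $V(T)$ the number of false discoveries and $R(T)$ the total number of discoveries among the first $T$ tests):

```latex
\mathrm{FDR}(T) = \mathbb{E}\!\left[ \frac{V(T)}{\max\{R(T),\, 1\}} \right],
\qquad
\mathrm{mFDR}(T) = \frac{\mathbb{E}[V(T)]}{\mathbb{E}[R(T)] + 1}.
```

The marginal variant mFDR is the quantity most directly controlled under adaptive stopping, with FDR control following under additional conditions.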
6. Empirical Validation and Practical Impact
Evaluation on both simulated bandit data (Gaussian and Bernoulli arms) and real-world settings (e.g., New Yorker Cartoon Caption Contest) validates the MultiBanAbs framework:
- Achieves a 50%–70% reduction in sample complexity relative to uniform-sampling A/B/n strategies at equivalent power.
- Maintains realized FDR close to the prescribed level $\alpha$ even under massive scale and adaptive monitoring.
- Outperforms naive combinations of bandit selection and independent tests, which fail to provide rigorous error bounds, and outperforms Bonferroni-FWER correction (which is overly conservative in this adaptive regime) (Yang et al., 2017).
7. Context, Extensions, and Related Directions
MultiBanAbs unifies classical online multiple testing and multi-armed bandit best-arm identification into a single adaptive process. It is particularly impactful in large-scale settings where continuous monitoring, efficiency, and statistical validity are critical (e.g., digital A/B/n experimentation, high-throughput scientific discovery).
Key directions identified for further research include:
- Adapting the framework to contextual and structured bandit settings.
- Incorporating early stopping rules and dynamic resource allocation.
- Extending to high-dimensional treatment selection and streaming data environments.
The framework is foundational for practitioners requiring rigorous discovery with minimal sampling overhead and robust error control, supporting scalable experimentation in both academic and industrial domains (Yang et al., 2017).