Powerful randomization tests for subgroup analysis (2504.21572v1)
Abstract: Randomization tests are widely used to generate valid $p$-values for testing sharp null hypotheses in finite-population causal inference. This article extends their application to subgroup analysis. We show that directly testing subgroup null hypotheses may lack power due to small subgroup sizes. Incorporating an estimator of the conditional average treatment effect (CATE) can substantially improve power but requires splitting the treatment variables between estimation and testing to preserve finite-sample validity. To this end, we propose BaR-learner, a Bayesian extension of the popular R-learner method for CATE estimation. BaR-learner imputes the treatment variables reserved for randomization tests, reducing the information loss due to sample-splitting. Furthermore, we show that the treatment variables most informative for training BaR-learner differ from those most valuable for increasing test power. Motivated by this insight, we introduce AdaSplit, a sample-splitting procedure that adaptively allocates units between estimation and testing. Simulation studies demonstrate that our method yields more powerful randomization tests than baselines that omit CATE estimation or rely on random sample-splitting. We also apply our method to a blood pressure intervention trial, identifying patient subgroups with significant treatment effects.
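To make the recipe concrete, below is a minimal illustrative sketch of the general idea the abstract describes: split units between estimation and testing, fit a CATE proxy on the estimation split, and run a Fisher randomization test of the sharp null of no effect within a subgroup on the testing split, using the fitted CATE to weight the test statistic. This is not the paper's BaR-learner or AdaSplit; the outcome-regression CATE proxy (scikit-learn's `GradientBoostingRegressor`), the weighted difference-in-means statistic, and all variable names are assumptions chosen for illustration.

```python
# Minimal sketch (assumed setup, not the paper's BaR-learner/AdaSplit):
# random sample-splitting + CATE-weighted randomization test in a subgroup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# --- simulate a completely randomized trial with a covariate-driven effect ---
n = 400
X = rng.normal(size=(n, 3))
Z = rng.binomial(1, 0.5, size=n)                  # treatment assignment
tau = 1.5 * (X[:, 0] > 0)                         # true CATE: effect only when X1 > 0
Y = X @ np.array([0.5, -0.3, 0.2]) + tau * Z + rng.normal(size=n)
subgroup = X[:, 0] > 0                            # subgroup of interest

# --- random sample-splitting (the paper's AdaSplit instead allocates adaptively) ---
est = rng.random(n) < 0.5                         # units used to learn a CATE proxy
test = ~est

# Crude CATE proxy: difference of two outcome regressions fit on the estimation split.
mu1 = GradientBoostingRegressor().fit(X[est & (Z == 1)], Y[est & (Z == 1)])
mu0 = GradientBoostingRegressor().fit(X[est & (Z == 0)], Y[est & (Z == 0)])
cate_hat = mu1.predict(X) - mu0.predict(X)        # depends only on X and estimation data

# --- Fisher randomization test on the testing split, restricted to the subgroup ---
idx = test & subgroup

def stat(z):
    # CATE-weighted difference in means: up-weight units predicted to respond.
    w = np.maximum(cate_hat[idx], 0.0)
    treated = np.sum(w * z * Y[idx]) / max(z.sum(), 1)
    control = np.sum(w * (1 - z) * Y[idx]) / max((1 - z).sum(), 1)
    return treated - control

obs = stat(Z[idx])
# Under the sharp null of no effect in the subgroup, Y is fixed; re-randomize Z
# within the testing split (permutation keeps the number treated fixed).
draws = [stat(rng.permutation(Z[idx])) for _ in range(2000)]
p_value = (1 + sum(d >= obs for d in draws)) / (1 + len(draws))
print(f"randomization p-value for the subgroup sharp null: {p_value:.3f}")
```

Because the CATE proxy is trained only on the estimation split, the test statistic is a fixed function of the testing-split assignments, which is what preserves the finite-sample validity of the randomization $p$-value; the paper's contributions (BaR-learner's imputation of the held-out treatments and AdaSplit's adaptive allocation) address the power lost to this splitting.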