Adjusting confidence intervals under covariate-adaptive randomization in non-inferiority and equivalence trials (2312.15619v1)
Abstract: Regulatory authorities recommend permutation or randomization tests to avoid inflating the type-I error rate when covariate-adaptive randomization is applied in randomized clinical trials. For non-inferiority and equivalence trials, this paper derives adjusted confidence intervals based on permutation and randomization methods, keeping the type-I error much closer to the pre-specified nominal significance level. We consider three outcome types for the adjusted confidence intervals: normal, binary, and time-to-event variables. For normal outcomes, we show that the adjusted confidence interval attains the nominal significance level. However, we highlight a theoretical challenge unique to non-inferiority and equivalence trials: for binary and time-to-event outcomes, the nominal significance level may not be attained when the model parameters are estimated with models that diverge from the data-generating model under the null hypothesis. We present simulation results to illustrate these features and to evaluate the performance of the adjusted confidence intervals.
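To make the general idea concrete, the following is a minimal sketch (not the authors' exact procedure) of a randomization-based confidence interval obtained by test inversion under a covariate-adaptive scheme, here simplified to stratified permuted assignment, for a normal outcome and a difference-in-means effect. The function names, the candidate-effect grid, and the number of re-randomizations are illustrative assumptions.

```python
import numpy as np

def rerandomize(trt, strata, rng):
    """Permute treatment labels within each stratum, mimicking the
    covariate-adaptive allocation used in the trial (assumed stratified here)."""
    new = np.empty_like(trt)
    for s in np.unique(strata):
        idx = np.flatnonzero(strata == s)
        new[idx] = rng.permutation(trt[idx])
    return new

def effect(y, trt):
    """Difference in means (treatment minus control)."""
    return y[trt == 1].mean() - y[trt == 0].mean()

def randomization_ci(y, trt, strata, alpha=0.05, n_perm=2000, grid=None, seed=0):
    """Invert randomization tests of H0: effect == delta over a grid of delta.

    For each candidate delta, treated outcomes are shifted by -delta so that H0
    corresponds to no treatment effect; the observed statistic is then compared
    with its re-randomization distribution. The interval collects every delta
    not rejected at two-sided level alpha."""
    rng = np.random.default_rng(seed)
    obs = effect(y, trt)
    if grid is None:
        # Illustrative grid around the observed effect; widen if needed.
        grid = np.linspace(obs - 3.0, obs + 3.0, 121)
    kept = []
    for delta in grid:
        y0 = y - delta * trt                      # outcomes under H0: effect == delta
        t_obs = abs(effect(y0, trt))
        t_perm = np.array([abs(effect(y0, rerandomize(trt, strata, rng)))
                           for _ in range(n_perm)])
        p = (1 + np.sum(t_perm >= t_obs)) / (n_perm + 1)
        if p > alpha:
            kept.append(delta)
    return (min(kept), max(kept)) if kept else (np.nan, np.nan)
```

In a non-inferiority trial with margin -M, non-inferiority would be concluded if the lower limit of such an interval exceeds -M; in an equivalence trial, if the whole interval lies within (-M, M). The paper's contribution concerns how intervals of this kind behave for normal, binary, and time-to-event outcomes when the working model may diverge from the data-generating model under the null.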