Model-free Variable Selection and Inference for High-dimensional Data (2410.19031v1)
Abstract: Statistical inference is challenging in high-dimensional data analysis. Existing post-selection inference procedures require an explicitly specified regression model as well as sparsity in that model, and their performance can be poor under either a misspecified nonlinear model or a violation of the sparsity assumption. In this paper, we propose a sufficient dimension association (SDA) technique that measures the association between each predictor and the response variable conditional on the other predictors. The proposed SDA method requires neither a specific form of regression model nor sparsity in the regression; instead, it assumes normalized or Gaussian-distributed predictors satisfying a Markov blanket property. We propose an estimator of the SDA and establish its asymptotic properties. For simultaneous hypothesis testing and variable selection, we construct test statistics based on the Kolmogorov-Smirnov and Cram{\'e}r-von Mises principles, with a multiplier bootstrap approach used to compute critical values and $p$-values. Extensive simulation studies demonstrate the validity and superior performance of our SDA method, and gene expression data from the Alzheimer's Disease Neuroimaging Initiative illustrate a real application.
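The multiplier-bootstrap calibration of Kolmogorov-Smirnov and Cram{\'e}r-von Mises type statistics mentioned in the abstract can be illustrated with a minimal, generic sketch. The function `multiplier_bootstrap_pvalues` and the input array `psi` (per-observation contributions to an empirical process on a grid) are hypothetical placeholders for illustration only; they are not the paper's SDA estimator.

```python
# Generic multiplier-bootstrap sketch (illustrative only, not the paper's SDA method):
# given per-observation contributions psi[i, t] to an empirical process over a grid t,
# compute a sup-norm (Kolmogorov-Smirnov type) and an integrated-square
# (Cramer-von Mises type) statistic, then calibrate both with Gaussian multipliers.
import numpy as np

def multiplier_bootstrap_pvalues(psi, n_boot=1000, rng=None):
    """psi: (n, m) array of per-observation contributions on an m-point grid."""
    rng = np.random.default_rng(rng)
    n, m = psi.shape
    process = np.sqrt(n) * psi.mean(axis=0)            # scaled empirical process
    ks_stat = np.max(np.abs(process))                  # KS-type statistic
    cvm_stat = np.mean(process ** 2)                   # CvM-type statistic

    centered = psi - psi.mean(axis=0, keepdims=True)   # center before multiplying
    mult = rng.standard_normal((n_boot, n))            # i.i.d. Gaussian multipliers
    boot_proc = mult @ centered / np.sqrt(n)           # (n_boot, m) bootstrap processes
    ks_boot = np.max(np.abs(boot_proc), axis=1)
    cvm_boot = np.mean(boot_proc ** 2, axis=1)

    # finite-sample p-values with the usual +1 correction
    ks_p = (1 + np.sum(ks_boot >= ks_stat)) / (1 + n_boot)
    cvm_p = (1 + np.sum(cvm_boot >= cvm_stat)) / (1 + n_boot)
    return {"KS": (ks_stat, ks_p), "CvM": (cvm_stat, cvm_p)}

# Toy usage: under the null, psi has mean zero at every grid point.
psi = np.random.default_rng(0).standard_normal((200, 50))
print(multiplier_bootstrap_pvalues(psi, n_boot=500, rng=1))
```

The bootstrap processes reuse the observed contributions, so the critical values adapt to the dependence across grid points without any model refitting; the "+1" correction keeps the reported $p$-values away from exactly zero for a finite number of bootstrap draws.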