Selecting Penalty Parameters of High-Dimensional M-Estimators using Bootstrapping after Cross-Validation
Abstract: We develop a new method for selecting the penalty parameter for $\ell_{1}$-penalized M-estimators in high dimensions, which we refer to as bootstrapping after cross-validation. We derive rates of convergence for the corresponding $\ell_1$-penalized M-estimator and also for the post-$\ell_1$-penalized M-estimator, which refits the non-zero entries of the former estimator without the penalty term in the criterion function. We demonstrate via simulations that our methods are not dominated by cross-validation in terms of estimation error and can outperform cross-validation in terms of inference. As an empirical illustration, we revisit Fryer Jr (2019), who investigated racial differences in police use of force, and confirm his findings.
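To fix ideas, the sketch below shows one way a bootstrap-after-cross-validation penalty choice might look for the Lasso (the $\ell_1$-penalized M-estimator with squared loss), followed by the post-$\ell_1$ refit on the selected support. The multiplier-bootstrap scheme, the quantile level, and the helper name `bootstrap_after_cv_lasso` are illustrative assumptions made here, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV, LinearRegression

def bootstrap_after_cv_lasso(X, y, n_boot=500, level=0.95, seed=0):
    """Illustrative sketch: choose the Lasso penalty by bootstrapping the score
    with residuals from a cross-validated pilot fit, then refit (assumed scheme)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Step 1: pilot fit with penalty chosen by cross-validation.
    pilot = LassoCV(cv=5, random_state=seed).fit(X, y)
    resid = y - pilot.predict(X)

    # Step 2: multiplier bootstrap of the maximal score
    # max_j |n^{-1} sum_i e_i* resid_i X_ij| using the pilot residuals.
    max_scores = np.empty(n_boot)
    for b in range(n_boot):
        mult = rng.standard_normal(n)          # Gaussian multipliers
        score = X.T @ (mult * resid) / n       # bootstrapped score vector
        max_scores[b] = np.abs(score).max()
    lam = np.quantile(max_scores, level)       # penalty = high quantile of max score

    # Step 3: l1-penalized fit at the bootstrapped penalty
    # (sklearn's Lasso minimizes (1/2n)||y - Xb||^2 + alpha*||b||_1).
    lasso = Lasso(alpha=lam).fit(X, y)

    # Step 4: post-l1 refit of the non-zero coefficients without penalty.
    selected = np.flatnonzero(lasso.coef_)
    post = LinearRegression().fit(X[:, selected], y) if selected.size else None
    return lasso, selected, post
```

The quantile level and the Gaussian multipliers are placeholders; the paper derives the appropriate penalty scaling and its theoretical guarantees for general M-estimators.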