
Probabilistic Best Subset Selection via Gradient-Based Optimization (2006.06448v4)

Published 11 Jun 2020 in stat.ME

Abstract: In high-dimensional statistics, variable selection recovers the latent sparse pattern from all possible covariate combinations. This paper proposes a novel optimization method to solve the exact L0-regularized regression problem, also known as best subset selection. We reformulate the optimization problem from a discrete space to a continuous one via probabilistic reparameterization. The new objective function is differentiable, but its gradient often cannot be computed in closed form. We therefore propose a family of unbiased gradient estimators for optimizing the best subset selection objective via stochastic gradient descent. Within this family, we identify the estimator with uniformly minimum variance. Theoretically, we study the general conditions under which the method is guaranteed to converge to the ground truth in expectation. The proposed method can find the true regression model from thousands of covariates in seconds. On a wide variety of synthetic and semi-synthetic datasets, the proposed method outperforms existing variable selection tools based on relaxed penalties, coordinate descent, and mixed integer optimization in both sparse pattern recovery and out-of-sample prediction.
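
To make the idea concrete, the sketch below illustrates the general recipe the abstract describes: each covariate's inclusion is modeled as an independent Bernoulli variable, the expected L0-penalized loss becomes a differentiable function of the inclusion logits, and an unbiased gradient estimate drives stochastic gradient descent. This is a minimal illustration using a generic score-function (REINFORCE) estimator, not the paper's uniformly-minimum-variance estimator; the problem sizes, penalty weight `lam`, and step size `lr` are assumed values, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression problem (illustrative only).
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 2.0            # L0 penalty weight (assumed value)
lr = 0.05            # SGD step size (assumed value)
theta = np.zeros(p)  # logits of the Bernoulli inclusion probabilities

def loss(z):
    """L0-penalized least-squares loss for a binary inclusion vector z."""
    idx = z.astype(bool)
    if not idx.any():
        return np.mean(y ** 2)  # empty model: predict zero
    Xs = X[:, idx]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return np.mean(resid ** 2) + lam * idx.sum() / n

for step in range(2000):
    pi = 1.0 / (1.0 + np.exp(-theta))        # inclusion probabilities
    z = (rng.random(p) < pi).astype(float)   # sample a candidate subset
    # Score-function estimator: unbiased for the gradient of E_z[loss(z)]
    # w.r.t. theta, since d/d(theta) log P(z) = z - pi for a Bernoulli
    # parameterized by logit theta.
    grad = loss(z) * (z - pi)
    theta -= lr * grad

pi = 1.0 / (1.0 + np.exp(-theta))
print("selected covariates:", np.where(pi > 0.5)[0])
```

Because the score-function estimator is noisy, a practical implementation would add variance reduction; the paper's contribution is precisely to identify the minimum-variance member of this family of unbiased estimators.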

Citations (19)
