A Differentiable Alternative to the Lasso Penalty (1609.04985v1)
Abstract: Regularized regression has become very popular, particularly for high-dimensional problems where the addition of a penalty term to the log-likelihood makes inference feasible where traditional methods fail. A number of penalties have been proposed in the literature, such as the lasso, SCAD, ridge and elastic net, to name a few. Despite their advantages and remarkable performance in rather extreme settings, where $p \gg n$, all these penalties, with the exception of ridge, are non-differentiable at zero. This can be a limitation in certain cases, such as the computational efficiency of parameter estimation in non-linear models or the derivation of degrees-of-freedom estimators for model selection criteria. With this paper, we provide the scientific community with a differentiable penalty, which can be used in any situation, but particularly where differentiability plays a key role. We show some desirable features of this function and prove theoretical properties of the resulting estimators within a regularized regression context. A simulation study and the analysis of a real dataset show good overall performance across different scenarios. The method is implemented in the R package DLASSO, freely available from CRAN.
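To make the non-differentiability issue concrete, below is a minimal R sketch of a generic smooth surrogate for the absolute value, $\mathrm{pen}(\beta; s) = \sqrt{\beta^2 + s^2} - s$, which is differentiable at zero and approaches the lasso penalty $|\beta|$ as $s \to 0$. This is an illustration of the general idea only, not the specific penalty proposed in the paper or the API of the DLASSO package.

```r
## Illustrative sketch: a generic smooth surrogate for the lasso penalty |beta|.
## NOT the paper's dlasso penalty; it only shows how a differentiable
## approximation behaves near zero.

smooth_abs <- function(beta, s = 0.01) sqrt(beta^2 + s^2) - s

## The derivative exists everywhere, including at beta = 0 (where it is 0),
## unlike d|beta|/dbeta, which is undefined at zero.
smooth_abs_grad <- function(beta, s = 0.01) beta / sqrt(beta^2 + s^2)

beta <- seq(-1, 1, length.out = 201)

## For small s the surrogate is numerically close to |beta| ...
max(abs(smooth_abs(beta, s = 1e-4) - abs(beta)))
## ... while remaining smooth at the origin.
smooth_abs_grad(0)
```

A smooth penalty of this kind allows gradient-based optimizers to be applied directly, which is the computational advantage the abstract points to for non-linear models.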