
Training a neural network for data reduction and better generalization (2411.17180v3)

Published 26 Nov 2024 in stat.ML and cs.LG

Abstract: At a time of environmental concern about artificial intelligence, in particular its greedy demand for storage and computation, sparsity-inducing neural networks offer a promising path towards frugality and less waste. Sparse learners compress the inputs (features) by selecting only those needed for good generalization. A human scientist can then give an intelligent interpretation to the few selected features. If genes are the inputs and cancer type is the output, the selected genes give the oncologist clues about which genes affect certain cancers. LASSO-type regularization leads to good input selection for linear associations, but few attempts have been made for nonlinear associations modeled as an artificial neural network. A stringent yet efficient way of testing whether a feature selection method works is to check whether a phase transition occurs in the probability of retrieving the relevant features, as observed and mathematically studied for linear models. Our method achieves exactly this for artificial neural networks and, on real data, offers the best compromise between the number of selected features and generalization performance. Our method is flexible, applying to complex models ranging from shallow to deep artificial neural networks and supporting various cost functions and sparsity-promoting penalties. It does not rely on cross-validation or on a validation set to select its single regularization parameter, which makes it user-friendly. Our approach can be seen as a form of compressed sensing for complex models, distilling high-dimensional data into a compact, interpretable subset of meaningful features, just the opposite of a black box. A Python package is available at https://github.com/VcMaxouuu/AnnHarderLasso containing all the simulations and ready-to-use models.
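To make the idea of LASSO-type input selection for a neural network concrete, here is a minimal sketch, not the authors' AnnHarderLasso implementation: it applies a generic group-lasso penalty to the first-layer weight columns of a small PyTorch MLP, so that whole input features can be shrunk toward zero and then thresholded. The network architecture, the penalty weight `lam`, and the threshold are illustrative assumptions, and plain Adam only shrinks (rather than exactly zeroes) the groups; the paper's actual method, penalty, and parameter-free tuning differ.

```python
# Sketch only: group-lasso-style input selection for an MLP (not AnnHarderLasso).
import torch
import torch.nn as nn

class SparseInputMLP(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.input_layer = nn.Linear(n_features, hidden)
        self.body = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.body(self.input_layer(x)).squeeze(-1)

    def input_group_norms(self):
        # One group per input feature: the corresponding column of first-layer weights.
        return self.input_layer.weight.norm(dim=0)

def train(model, X, y, lam=0.1, epochs=500, lr=1e-2):
    # lam is a hypothetical penalty weight; Adam shrinks but does not exactly
    # zero the groups, so a proximal step would be needed for exact sparsity.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(model(X), y) + lam * model.input_group_norms().sum()
        loss.backward()
        opt.step()
    return model

# Toy usage: only the first three of 50 features drive the (nonlinear) response.
torch.manual_seed(0)
X = torch.randn(200, 50)
y = torch.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * torch.randn(200)
model = train(SparseInputMLP(50), X, y)
selected = (model.input_group_norms() > 1e-2).nonzero().flatten()
print("selected features:", selected.tolist())
```

Under this kind of setup, the phase-transition check mentioned in the abstract amounts to repeating such runs over varying sample sizes and numbers of relevant features and recording how often the true support is recovered.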
