Sparse mean localization by information theory (1704.00575v1)

Published 3 Apr 2017 in stat.AP, cs.IT, and math.IT

Abstract: Sparse feature selection is necessary when we fit statistical models and have access to a large group of features, do not know which are relevant, but assume that most are not. Alternatively, when the number of features is larger than the available data, the model becomes over-parametrized and the sparse feature selection task involves selecting the most informative variables for the model. When the model is a simple location model and the number of relevant features does not grow with the total number of features, sparse feature selection corresponds to sparse mean estimation. We deal with a simplified mean estimation problem consisting of an additive model with Gaussian noise and a mean that lies in a restricted, finite hypothesis space. This restriction turns the mean estimation problem into a selection problem of combinatorial nature. Although the hypothesis space is finite, its size is exponential in the dimension of the mean. In limited data settings, and when the size of the hypothesis space depends on the amount of data or on the dimension of the data, choosing an approximation set of hypotheses is a desirable approach. Choosing a set of hypotheses instead of a single one implies replacing the bias-variance trade-off with a resolution-stability trade-off. Generalization capacity provides a resolution selection criterion based on allowing the learning algorithm to communicate the largest amount of information in the data to the learner without error. In this work the theory of approximation set coding and generalization capacity is explored in order to understand this approach. We then apply the generalization capacity criterion to the simplified sparse mean estimation problem and detail an importance sampling algorithm which at once addresses the difficulty posed by large hypothesis spaces and the slow convergence of uniform sampling algorithms.
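
To make the estimation setting concrete, below is a minimal sketch that assumes a hypothesis space of k-sparse mean vectors with a common amplitude theta and a simple evidence-weighted proposal distribution. These choices, and all names and parameters in the code, are illustrative assumptions rather than details taken from the paper; in particular, this is not the paper's approximation-set-coding or generalization-capacity machinery, only an example of scoring a finite hypothesis space under additive Gaussian noise and sampling it non-uniformly instead of enumerating it.

# Minimal sketch (not the paper's algorithm): sparse mean estimation with
# additive Gaussian noise and a finite hypothesis space of k-sparse means,
# scored by log-likelihood and explored by importance sampling rather than
# uniform enumeration. The amplitude-theta hypothesis space and the
# coordinate-wise proposal below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d, k, theta, sigma, n = 20, 3, 1.0, 1.0, 50  # dimension, sparsity, signal, noise, samples

# True sparse mean and observed data: y_i = mu + Gaussian noise
mu_true = np.zeros(d)
mu_true[rng.choice(d, size=k, replace=False)] = theta
y = mu_true + sigma * rng.standard_normal((n, d))

def log_likelihood(mu, y, sigma):
    """Gaussian log-likelihood of the data under a candidate mean mu."""
    return -0.5 * np.sum((y - mu) ** 2) / sigma**2

# Importance proposal over coordinates: weight each coordinate by its
# per-coordinate evidence, so proposed supports concentrate on plausible
# hypotheses instead of being drawn uniformly from all C(d, k) supports.
coord_means = y.mean(axis=0)
weights = np.exp(np.clip(coord_means * theta * n / sigma**2, -30, 30))
proposal = weights / weights.sum()

best_mu, best_score = None, -np.inf
for _ in range(2000):
    support = rng.choice(d, size=k, replace=False, p=proposal)
    mu = np.zeros(d)
    mu[support] = theta
    score = log_likelihood(mu, y, sigma)
    if score > best_score:
        best_mu, best_score = mu, score

print("true support:", np.flatnonzero(mu_true))
print("selected support:", np.flatnonzero(best_mu))

Sampling supports in proportion to per-coordinate evidence concentrates the search on plausible hypotheses, which is the intuition for preferring an importance sampling scheme over uniform sampling when the hypothesis space grows exponentially with the dimension of the mean.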
