Fast SVM-based Feature Elimination Utilizing Data Radius, Hard-Margin, Soft-Margin (1210.4460v4)

Published 16 Oct 2012 in stat.ML and cs.LG

Abstract: The MFE-LO method proposed hard-margin maximization as a feature elimination criterion; here we combine it with utilization of the data radius to further aim to lower generalization error, since several published bounds and bound-related formulations on misclassification risk (or error) involve the radius, e.g. the product of the squared radius and the squared norm of the weight vector. We additionally propose novel feature elimination criteria that are instead formulated in the soft-margin sense yet can likewise utilize the data radius, building on previously published bound-related formulations for approaching the radius in the soft-margin setting, where one stated principle is that "finding a bound whose minima are in a region with small leave-one-out values may be more important than its tightness". These additional criteria combine radius utilization with a novel, computationally low-cost soft-margin light classifier retraining approach we devise, named QP1; QP1 is the soft-margin counterpart of the hard-margin LO. We correct an error in the MFE-LO description, find that MFE-LO achieves the highest generalization accuracy among the previously published margin-based feature elimination (MFE) methods, discuss some limitations of MFE-LO, and find that our novel methods outperform MFE-LO, attaining a lower test-set classification error rate. Our novel methods give promising results both on several datasets that each have a large number of features and fall into the 'large features, few samples' category, and on datasets with a lower (low-to-intermediate) number of features. In particular, the tunable variants of our methods, which do not employ the (non-tunable) LO approach, can in the future be tuned more aggressively than here, to aim to demonstrate even higher performance for them.
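
For orientation, the sketch below illustrates the general radius-margin idea the abstract refers to: using a quantity of the form R^2 * ||w||^2 (squared data radius times squared weight-vector norm) as a criterion for greedy backward feature elimination with a linear SVM. This is a minimal, hypothetical illustration only, not the paper's MFE-LO or QP1 procedures; the scikit-learn API, the mean-centered radius approximation, and the greedy loop are all assumptions made for the example.

```python
# Minimal sketch of radius-margin guided backward feature elimination.
# NOT the paper's MFE-LO/QP1 method; an illustrative assumption throughout.
import numpy as np
from sklearn.svm import SVC

def radius_margin_score(X, y, C=1.0):
    """Return R^2 * ||w||^2 for a linear SVM trained on (X, y).
    R is approximated as the radius of the smallest ball centered at the
    data mean (a cheap stand-in for the exact enclosing-ball radius)."""
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_.ravel()
    center = X.mean(axis=0)
    R2 = np.max(np.sum((X - center) ** 2, axis=1))
    return R2 * np.dot(w, w)

def eliminate_features(X, y, n_keep, C=1.0):
    """Greedily drop, one at a time, the feature whose removal yields the
    lowest radius-margin score, until n_keep features remain."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        scores = []
        for j in active:
            cols = [k for k in active if k != j]
            scores.append((radius_margin_score(X[:, cols], y, C), j))
        _, drop = min(scores)  # removing `drop` gives the smallest bound value
        active.remove(drop)
    return active
```

As a usage example under the same assumptions, calling eliminate_features(X, y, n_keep=10) on standardized data returns the indices of the 10 retained features; the paper's actual criteria replace this brute-force retraining with the low-cost LO (hard-margin) and QP1 (soft-margin) retraining approaches.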

Citations (2)

Summary

We haven't generated a summary for this paper yet.

Open Problems

We haven't generated a list of open problems mentioned in this paper yet.

Continue Learning

We haven't generated follow-up questions for this paper yet.

Authors (1)
