Feature selection in linear SVMs via a hard cardinality constraint: a scalable SDP decomposition approach (2404.10099v2)

Published 15 Apr 2024 in math.OC and cs.LG

Abstract: In this paper, we study the embedded feature selection problem in linear Support Vector Machines (SVMs), in which a cardinality constraint is employed, leading to an interpretable classification model. The problem is NP-hard due to the presence of the cardinality constraint, even though the original linear SVM is solvable in polynomial time. To handle the hard problem, we first introduce two mixed-integer formulations, for which we propose novel semidefinite relaxations. Exploiting the sparsity pattern of the relaxations, we decompose the problems and obtain equivalent relaxations over a much smaller cone, making the conic approaches scalable. To make the best use of the decomposed relaxations, we propose heuristics that exploit the information from their optimal solutions. Moreover, an exact procedure is proposed that solves a sequence of mixed-integer decomposed semidefinite optimization problems. Numerical results on classical benchmark datasets are reported, showing the efficiency and effectiveness of our approach.
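To make the setting concrete, here is a minimal sketch of the kind of model the abstract describes: a soft-margin linear SVM with a hard cardinality constraint on the weight vector, written as a big-M mixed-integer program. The notation (regularization parameter C, bound M, feature budget B, binary indicators z) is ours for illustration only; the paper's two mixed-integer formulations and its semidefinite relaxations are not reproduced here.

\begin{aligned}
\min_{w,\, b,\, \xi,\, z} \quad & \tfrac{1}{2}\lVert w \rVert_2^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & y_i \left( w^\top x_i + b \right) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \quad i = 1, \dots, n, \\
& -M z_j \le w_j \le M z_j, \qquad z_j \in \{0, 1\}, \quad j = 1, \dots, d, \\
& \sum_{j=1}^{d} z_j \le B.
\end{aligned}

Here z_j = 1 allows feature j to enter the classifier, so the last constraint enforces the hard cardinality requirement \lVert w \rVert_0 \le B that makes the problem NP-hard. Relaxing the binary variables to z \in [0, 1]^d (or lifting to a matrix variable) yields the kind of convex relaxation that conic approaches such as the paper's can then strengthen and decompose.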

