On Regularized Sparse Logistic Regression (2309.05925v2)

Published 12 Sep 2023 in cs.LG, cs.AI, and stat.ML

Abstract: Sparse logistic regression performs classification and feature selection simultaneously. Although many studies have addressed $\ell_1$-regularized logistic regression, comparatively little work exists on sparse logistic regression with nonconvex regularization terms. In this paper, we propose a unified framework for solving $\ell_1$-regularized logistic regression that extends naturally to nonconvex regularization terms, provided a certain requirement is satisfied. In addition, we employ a different line search criterion to guarantee monotone convergence for various regularization terms. Empirical experiments on binary classification tasks with real-world datasets demonstrate that our proposed algorithms perform classification and feature selection effectively at a lower computational cost.
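The abstract describes a unified framework in which the same solver handles the convex $\ell_1$ penalty and nonconvex penalties, with a line search that enforces monotone descent. As an illustration only (not the paper's actual algorithm, whose line search criterion and requirements on the regularizer may differ), the sketch below implements proximal gradient descent with a backtracking line search for logistic loss, where swapping the proximal operator switches between the $\ell_1$ penalty and the minimax concave penalty (MCP) as one nonconvex example. All function names, the `gamma=3.0` default, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Average logistic loss with labels y in {0, 1} (numerically stable)."""
    z = X @ w
    return np.mean(np.maximum(z, 0.0) - y * z + np.log1p(np.exp(-np.abs(z))))

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

def prox_l1(v, t, lam):
    """Soft-thresholding: proximal operator of t * lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_mcp(v, t, lam, gamma=3.0):
    """Proximal operator of t * MCP(lam, gamma); requires gamma > t."""
    out = v.copy()
    inner = np.abs(v) <= gamma * lam
    shrunk = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
    out[inner] = shrunk[inner] / (1.0 - t / gamma)
    return out

def mcp_value(w, lam, gamma=3.0):
    """MCP penalty value, summed over coordinates."""
    a = np.abs(w)
    return np.sum(np.where(a <= gamma * lam,
                           lam * a - a ** 2 / (2.0 * gamma),
                           gamma * lam ** 2 / 2.0))

def prox_grad(X, y, lam, prox, penalty, n_iter=500, t0=1.0, beta=0.5):
    """Proximal gradient with backtracking; the objective decreases monotonically."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g, t = logistic_grad(w, X, y), t0
        while True:
            w_new = prox(w - t * g, t, lam)
            d = w_new - w
            # Sufficient decrease on the smooth part; combined with the prox
            # step this guarantees the full objective does not increase,
            # even for the nonconvex MCP prox.
            if (logistic_loss(w_new, X, y)
                    <= logistic_loss(w, X, y) + g @ d + d @ d / (2.0 * t)):
                break
            t *= beta  # backtrack
        if np.linalg.norm(w_new - w) < 1e-8:
            break
        w = w_new
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    w_true = np.zeros(50)
    w_true[:5] = 2.0
    y = (X @ w_true + 0.5 * rng.standard_normal(200) > 0).astype(float)
    lam = 0.05
    w_l1 = prox_grad(X, y, lam, prox_l1, lambda w: lam * np.abs(w).sum())
    w_mcp = prox_grad(X, y, lam, prox_mcp, lambda w: mcp_value(w, lam))
    print("l1 nonzeros:", np.sum(np.abs(w_l1) > 1e-6),
          "mcp nonzeros:", np.sum(np.abs(w_mcp) > 1e-6))
```

The design point this sketch mirrors is that only the proximal operator (and the penalty value used for monitoring) changes between the convex and nonconvex cases, which is one natural way to read the abstract's "unified framework" claim under these assumptions.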
