Strong screening rules for group-based SLOPE models (2405.15357v1)

Published 24 May 2024 in stat.ML, cs.LG, and stat.ME

Abstract: Tuning the regularization parameter in penalized regression models is an expensive task, requiring multiple models to be fit along a path of parameters. Strong screening rules drastically reduce computational costs by lowering the dimensionality of the input prior to fitting. We develop strong screening rules for group-based Sorted L-One Penalized Estimation (SLOPE) models: Group SLOPE and Sparse-group SLOPE. The developed rules are applicable to the wider family of group-based OWL models, including OSCAR. Our experiments on both synthetic and real data show that the screening rules significantly accelerate the fitting process. The screening rules make group SLOPE and sparse-group SLOPE practical to apply to high-dimensional datasets, particularly those encountered in genetics.
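The rules described in the abstract operate path-wise: the fit at the previous regularization parameter bounds which groups can become active at the next one, and only the surviving groups are passed to the solver (with KKT checks afterwards to catch any screened group that should have been active). The sketch below illustrates that general mechanism with the classical group-wise strong rule for group-lasso-type penalties; it is not the paper's group SLOPE or sparse-group SLOPE rule (those handle sorted, OWL-type penalty sequences), and the function name, data, and group structure are illustrative assumptions only.

```python
import numpy as np

def group_strong_rule(X, residual, groups, lam_prev, lam_next):
    """Conceptual group-wise strong screening rule (sketch, not the paper's rule).

    On a decreasing penalty path, group g is tentatively discarded at the next
    penalty value lam_next if the norm of its gradient at the previous solution
    falls below 2 * lam_next - lam_prev; surviving groups are fit, and violations
    are detected via a KKT check. The paper's rules adapt this idea to sorted
    (OWL-type) penalty sequences, which this simplified sketch does not reproduce.
    """
    threshold = 2.0 * lam_next - lam_prev
    return [g for g, idx in groups.items()
            if np.linalg.norm(X[:, idx].T @ residual) >= threshold]

# Illustrative usage with synthetic data and a hypothetical group structure.
rng = np.random.default_rng(0)
n, p = 100, 40
X = rng.standard_normal((n, p))
y = X[:, :4] @ np.array([2.0, -1.5, 1.0, 0.5]) + 0.1 * rng.standard_normal(n)
groups = {g: list(range(4 * g, 4 * g + 4)) for g in range(p // 4)}

# At the start of the path the fit is zero, so the residual is just y.
residual = y.copy()
norms = [np.linalg.norm(X[:, idx].T @ residual) for idx in groups.values()]
lam_max = max(norms)  # penalty at which every group is zero
active = group_strong_rule(X, residual, groups,
                           lam_prev=lam_max, lam_next=0.9 * lam_max)
print("groups surviving the screen:", active)
```

Running this, only the group carrying signal typically survives the screen, so the downstream solver sees a far smaller design matrix; this is the dimensionality reduction the abstract refers to, applied here in a deliberately simplified setting.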
