Outlier-robust Estimation of a Sparse Linear Model Using Invexity

Published 22 Jun 2023 in cs.LG (arXiv:2306.12678v1)

Abstract: In this paper, we study the problem of estimating a sparse regression vector with the correct support in the presence of outlier samples. Lasso-type methods are well known to be inconsistent in this scenario. We propose a combinatorial version of the outlier-robust lasso that also identifies the clean samples, which we then use to obtain a good estimate. We further give a novel invex relaxation of the combinatorial problem and establish provable theoretical guarantees for this relaxation. Finally, we conduct experiments that validate our theory and compare our results against the standard lasso.
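
The combinatorial formulation the abstract alludes to can be made concrete with a short sketch. The Python snippet below is an illustration only, not the paper's method: it encodes a joint objective over a regression vector w and a binary clean-sample mask z (minimize sum_i z_i (y_i - x_i^T w)^2 + lam * ||w||_1 subject to sum_i z_i >= n - k) and attacks it with a naive alternating heuristic rather than the paper's invex relaxation. The outlier budget k, the penalty lam, and the toy data are assumptions chosen purely for illustration.

    # Illustrative only: a naive alternating heuristic for the combinatorial
    # outlier-robust lasso (NOT the paper's invex relaxation).
    import numpy as np
    from sklearn.linear_model import Lasso

    def robust_lasso_alternating(X, y, k, lam=0.1, n_iter=20):
        """Alternate between (1) fitting a lasso on the samples currently
        flagged as clean and (2) re-flagging the n - k samples with the
        smallest squared residuals. k is an assumed outlier budget."""
        n = X.shape[0]
        clean = np.ones(n, dtype=bool)  # start by trusting every sample
        model = Lasso(alpha=lam)
        for _ in range(n_iter):
            model.fit(X[clean], y[clean])
            resid = (y - model.predict(X)) ** 2
            new_clean = np.zeros(n, dtype=bool)
            new_clean[np.argsort(resid)[: n - k]] = True
            if np.array_equal(new_clean, clean):
                break  # clean-sample mask has stabilized
            clean = new_clean
        return model.coef_, clean

    # Toy experiment: sparse ground truth, k grossly corrupted responses.
    rng = np.random.default_rng(0)
    n, p, s, k = 200, 50, 5, 20
    w_true = np.zeros(p)
    w_true[:s] = 1.0
    X = rng.standard_normal((n, p))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    y[:k] += 10.0 * rng.standard_normal(k)  # inject outlier samples

    w_robust, clean = robust_lasso_alternating(X, y, k=k)
    w_plain = Lasso(alpha=0.1).fit(X, y).coef_
    print("robust support:", np.flatnonzero(w_robust))
    print("plain lasso support:", np.flatnonzero(w_plain))

In this toy setup the plain lasso's support is typically polluted by the corrupted responses, while the masked fit tends to recover the true support, mirroring the comparison the abstract describes.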
