Is Algorithmic Stability Testable? A Unified Framework under Computational Constraints (2405.15107v2)

Published 23 May 2024 in stat.ML, cs.LG, math.ST, and stat.TH

Abstract: Algorithmic stability is a central notion in learning theory that quantifies the sensitivity of an algorithm to small changes in the training data. If a learning algorithm satisfies certain stability properties, many important downstream implications follow, such as generalization, robustness, and reliable predictive inference. Verifying that stability holds for a particular algorithm is therefore an important and practical question. However, recent results establish that testing the stability of a black-box algorithm is impossible, given limited data from an unknown distribution, in settings where the data lies in an uncountably infinite space (such as real-valued data). In this work, we extend this question to examine a far broader range of settings, where the data may lie in any space -- for example, categorical data. We develop a unified framework for quantifying the hardness of testing algorithmic stability, which establishes that across all settings, if the available data is limited then exhaustive search is essentially the only universally valid mechanism for certifying algorithmic stability. Since in practice any test of stability is subject to computational constraints, exhaustive search is infeasible; this implies fundamental limits on our ability to test the stability of a black-box algorithm.
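
To make the stability notion concrete, below is a minimal Python sketch (not the paper's framework) of the kind of empirical check whose limits the paper characterizes: it probes leave-one-out stability of a black-box algorithm by refitting with each training point removed and measuring how much a fixed test prediction moves. The names fit_predict, epsilon, and delta are illustrative assumptions, and the least-squares fit merely stands in for an arbitrary black-box learner.

import numpy as np

def fit_predict(X_train, y_train, x_test):
    # Stand-in for an arbitrary black-box learning algorithm:
    # here, an ordinary least-squares fit followed by prediction.
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return x_test @ w

def loo_prediction_changes(X, y, x_test):
    # Compare the full-data prediction to each leave-one-out prediction.
    full = fit_predict(X, y, x_test)
    n = len(y)
    diffs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i  # drop training point i
        diffs[i] = abs(full - fit_predict(X[mask], y[mask], x_test))
    return diffs

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
x_test = rng.normal(size=3)

diffs = loo_prediction_changes(X, y, x_test)
epsilon, delta = 0.1, 0.05  # illustrative (epsilon, delta) stability thresholds
frac = np.mean(diffs > epsilon)
print(f"max LOO prediction change: {diffs.max():.4f}")
print(f"fraction of LOO refits moving the prediction by more than "
      f"epsilon={epsilon}: {frac:.3f} (would 'pass' only if <= delta={delta})")

Per the abstract's main result, no finite battery of perturbation checks like this one can universally certify stability from limited data: exhaustive search is essentially the only universally valid mechanism, and computational constraints rule it out.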
