SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models (2312.06638v1)

Published 11 Dec 2023 in cs.LG, cs.AI, and stat.ML

Abstract: A new method called the Survival Beran-based Neural Importance Model (SurvBeNIM) is proposed. It aims to explain predictions of machine learning survival models, which take the form of survival or cumulative hazard functions. The main idea behind SurvBeNIM is to extend the Beran estimator by incorporating importance functions into its kernels and by implementing these importance functions as a set of neural networks trained jointly in an end-to-end manner. Two strategies for training and applying the neural network implementing SurvBeNIM are proposed. In the first, the network is trained anew for each explained instance. In the second, the network is trained only once on all instances from the dataset and on all generated instances, and is then used to explain any instance in the dataset domain. Numerical experiments compare the method with existing explanation methods. Code implementing the proposed method is publicly available.
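
To make the core idea concrete, the sketch below shows a Beran-type conditional survival estimator whose Gaussian kernel weights are modulated by per-feature importance values. This is a minimal illustration, not the authors' implementation: the importance values are fixed numbers standing in for the outputs of the jointly trained neural importance functions described in the abstract, and the function name, bandwidth, and synthetic data are all hypothetical.

```python
# Minimal sketch of a Beran estimator with importance-weighted kernels.
# Assumptions (not from the paper's code): Gaussian kernel, fixed per-feature
# importances in place of the trained neural importance functions.
import numpy as np

def beran_survival(x, X_train, times, events, importances, bandwidth=1.0):
    """Estimate S(t | x) on the sorted training event times.

    importances: per-feature weights; in SurvBeNIM these would come from
    neural networks, here they are constants for illustration only.
    """
    order = np.argsort(times)
    times, events, X_train = times[order], events[order], X_train[order]

    # Importance-weighted squared distances inside the Gaussian kernel.
    d2 = ((X_train - x) ** 2 * importances).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w = w / w.sum()                      # normalized kernel weights W(x, x_i)

    # Beran estimator: S(t | x) = prod_{t_i <= t, event} [1 - W_i / (1 - sum_{j<i} W_j)]
    cum_w = np.concatenate(([0.0], np.cumsum(w)[:-1]))
    factors = np.where(events == 1,
                       1.0 - w / np.clip(1.0 - cum_w, 1e-12, None),
                       1.0)
    surv = np.cumprod(factors)           # survival curve at each sorted time
    return times, surv

# Toy usage with synthetic right-censored data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
t = rng.exponential(scale=1.0, size=50)
e = rng.integers(0, 2, size=50)
grid, s = beran_survival(X[0], X, t, e, importances=np.array([1.0, 0.2, 0.5]))
```

In the paper's end-to-end setup, the importance values would instead be produced by neural networks and trained so that the resulting Beran predictions approximate the black-box survival model being explained.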

