Explanation of Machine Learning Models of Colon Cancer Using SHAP Considering Interaction Effects (2208.03112v1)

Published 5 Aug 2022 in cs.LG

Abstract: When using machine learning techniques in decision-making processes, the interpretability of the models is important. Shapley additive explanations (SHAP) is one of the most promising interpretation methods for machine learning models. Interaction effects occur when the effect of one variable depends on the value of another variable. Even if each variable individually has little effect on the outcome, their combination can have an unexpectedly large impact. Understanding interactions is important for understanding machine learning models; however, naive SHAP analysis cannot distinguish between main effects and interaction effects. In this paper, we introduce the Shapley-Taylor index as an interpretation method for machine learning models using SHAP considering interaction effects. We apply the method to the cancer cohort data of Kyushu University Hospital (N=29,080) to analyze what combination of factors contributes to the risk of colon cancer.
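For readers who want to see how a SHAP-based main-effect versus interaction-effect decomposition looks in practice, the sketch below uses the open-source `shap` library's TreeExplainer interaction values. This is not the paper's own code and not the Shapley-Taylor index it introduces; it only illustrates the related pairwise SHAP interaction decomposition the paper builds on. The model, data, and feature names are synthetic placeholders, not the Kyushu University Hospital cohort.

```python
# Minimal sketch (assumptions: xgboost + shap installed; synthetic data
# stands in for the cohort described in the paper).
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic binary-outcome data with placeholder feature names.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = [f"x{i}" for i in range(X.shape[1])]

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)

# Shape: (n_samples, n_features, n_features).
# Diagonal entries are per-feature main effects; off-diagonal entries are
# pairwise interaction effects, split symmetrically between the two features.
inter = explainer.shap_interaction_values(X)

main_effect_x0 = inter[:, 0, 0]                       # main effect of x0
interaction_x0_x1 = inter[:, 0, 1] + inter[:, 1, 0]   # interaction of x0 and x1

# Sanity check: summing main and interaction effects per sample recovers the
# ordinary SHAP values, so the attribution is an exact additive decomposition.
shap_values = explainer.shap_values(X)
print("max decomposition error:",
      np.abs(inter.sum(axis=2) - shap_values).max())
```

Inspecting the off-diagonal terms (e.g. ranking feature pairs by mean absolute interaction value) is what lets an analysis ask which combinations of factors, rather than single factors, drive the predicted risk.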

Authors (4)
  1. Yasunobu Nohara (2 papers)
  2. Toyoshi Inoguchi (1 paper)
  3. Chinatsu Nojiri (1 paper)
  4. Naoki Nakashima (2 papers)
Citations (1)
