Causality Learning: A New Perspective for Interpretable Machine Learning (2006.16789v2)

Published 27 Jun 2020 in cs.LG, cs.AI, and stat.ML

Abstract: Recent years have witnessed the rapid growth of machine learning across a wide range of fields such as image recognition, text classification, credit scoring, and recommendation systems. Despite their strong performance in these areas, researchers remain concerned about the mechanisms underlying ML techniques, which are inherently black-box and are becoming more complex in pursuit of higher accuracy. Interpreting machine learning models is therefore currently a mainstream topic in the research community. However, traditional interpretable machine learning focuses on association rather than causality. This paper provides an overview of causal analysis, covering the fundamental background and key concepts, and then summarizes recent causal approaches to interpretable machine learning. Evaluation techniques for assessing method quality and open problems in causal interpretability are also discussed.

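The abstract's central distinction, association versus causality, can be illustrated with a minimal sketch. The toy structural causal model below is an assumption for illustration only (it does not come from the paper): a confounder Z drives both X1 and X2, but only X1 causally affects Y. An associational view (correlation) makes X2 look important, while an interventional estimate of do(X2) shows its causal effect is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical toy SCM (illustrative assumption, not from the paper):
#   Z -> X1 -> Y,  Z -> X2,  and X2 has no causal effect on Y.
def sample(n, do_x2=None):
    z = rng.normal(size=n)                 # unobserved confounder
    x1 = z + 0.5 * rng.normal(size=n)      # true cause of Y
    x2 = z + 0.5 * rng.normal(size=n)      # spuriously correlated with Y via Z
    if do_x2 is not None:                  # intervention do(X2 = value)
        x2 = np.full(n, do_x2)
    y = 2.0 * x1 + rng.normal(size=n)      # Y depends only on X1
    return x1, x2, y

# Associational "importance": X2 correlates strongly with Y ...
x1, x2, y = sample(n)
print("corr(X2, Y) =", round(np.corrcoef(x2, y)[0, 1], 3))

# ... but the interventional (causal) effect of X2 on Y is ~0.
_, _, y_lo = sample(n, do_x2=-1.0)
_, _, y_hi = sample(n, do_x2=+1.0)
print("E[Y | do(X2=+1)] - E[Y | do(X2=-1)] =", round(y_hi.mean() - y_lo.mean(), 3))
```

Association-based interpretability methods would attribute importance to X2 here, whereas a causal interpretation, which this survey argues for, would not.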
Authors (5)
  1. Guandong Xu (93 papers)
  2. Tri Dung Duong (6 papers)
  3. Qian Li (236 papers)
  4. Shaowu Liu (3 papers)
  5. Xianzhi Wang (49 papers)
Citations (47)