A Voting Approach for Explainable Classification with Rule Learning (2311.07323v2)
Abstract: State-of-the-art results in typical classification tasks are mostly achieved by unexplainable machine learning methods, such as deep neural networks. In contrast, in this paper we investigate the application of rule learning methods in such a context. Classifications are then based on comprehensible (first-order) rules that explain the predictions made. In general, however, rule-based classifications are less accurate than state-of-the-art results, often significantly so. As our main contribution, we introduce a voting approach that combines both worlds, aiming to achieve results comparable to those of (unexplainable) state-of-the-art methods while still providing explanations in the form of deterministic rules. On a variety of benchmark data sets, including a use case of significant interest to the insurance industry, we show that our approach not only clearly outperforms ordinary rule learning methods, but also yields results on a par with state-of-the-art outcomes.
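To illustrate the general idea (this is a minimal sketch under assumptions, not the paper's exact method), the snippet below combines a rule-based member with a black-box member in a voting ensemble. It uses scikit-learn's `VotingClassifier`, a shallow decision tree as a stand-in for a dedicated rule learner (its root-to-leaf paths read as deterministic if-then rules), and gradient boosting as a stand-in for the unexplainable state-of-the-art model; all of these concrete choices are assumptions for the sake of the example.

```python
# Minimal sketch (not the authors' exact method): combine a rule-based
# classifier with a black-box model via voting, so predictions remain
# accurate while the rule member supplies human-readable explanations.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-in for a rule learner: a shallow decision tree whose paths
# can be printed as deterministic if-then rules.
rules = DecisionTreeClassifier(max_depth=3, random_state=0)
# Stand-in for the (unexplainable) state-of-the-art model.
black_box = GradientBoostingClassifier(random_state=0)

ensemble = VotingClassifier(
    estimators=[("rules", rules), ("black_box", black_box)],
    voting="soft",  # average the members' predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))

# The fitted rule member can be rendered as comprehensible rules.
print(export_text(ensemble.named_estimators_["rules"]))
```

In such a setup, the voting weights and the choice of members govern the trade-off the abstract describes: the black-box member lifts accuracy toward state-of-the-art levels, while the rule member keeps an explanation available for each prediction.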