Explanation-by-Example Based on Item Response Theory (2210.01638v1)
Abstract: Intelligent systems that use Machine Learning classification algorithms are increasingly common in everyday society. However, many of these systems rely on black-box models that cannot explain their own predictions. This situation leads researchers in the field, and society at large, to the following question: how can I trust the prediction of a model I cannot understand? In this context, eXplainable Artificial Intelligence (XAI) emerges as a field of AI that aims to create techniques capable of explaining a classifier's decisions to the end user. As a result, several techniques have emerged, among them Explanation-by-Example, which still has few initiatives consolidated by the community currently working with XAI. This research explores Item Response Theory (IRT) as a tool to explain models and to measure the reliability of the Explanation-by-Example approach. To this end, four datasets with different levels of complexity were used, and the Random Forest model served as the hypothesis under test. In the test set, 83.8% of the errors came from instances that IRT flags as unreliable for the model.
- Lucas F. F. Cardoso
- José de S. Ribeiro
- Vitor C. A. Santos
- Raíssa L. Silva
- Marcelle P. Mota
- Ronnie C. O. Alves
- Ricardo B. C. Prudêncio
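The core idea in the abstract, using an IRT model fitted over classifier responses to flag test instances where a prediction should not be trusted, can be sketched in code. The snippet below is a minimal illustration, not the authors' pipeline: it assumes scikit-learn, a stand-in dataset (breast cancer), an arbitrary pool of auxiliary classifiers as IRT "respondents", and a crude Rasch-style (1PL) approximation in which item difficulty and respondent ability are estimated from simple hit rates rather than by full IRT estimation; the 0.5 threshold for "unreliable" is likewise an assumption.

```python
# Hedged sketch: flag test instances as unreliable for a Random Forest using a
# Rasch-style (1PL) IRT approximation. All modeling choices here are
# illustrative assumptions, not the paper's actual method or datasets.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A pool of classifiers plays the role of IRT "respondents"; each test
# instance is an "item" they answer correctly or incorrectly.
pool = [LogisticRegression(max_iter=5000), GaussianNB(), KNeighborsClassifier()]
responses = np.array([clf.fit(X_tr, y_tr).predict(X_te) == y_te for clf in pool])

# Rasch-style estimates: item difficulty from the per-instance error rate of the
# pool, respondent ability from the overall hit rate (clipped to avoid infinities).
p_item = np.clip(responses.mean(axis=0), 0.05, 0.95)   # per-instance hit rate
difficulty = np.log((1 - p_item) / p_item)             # harder item => larger value

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
rf_correct = rf.predict(X_te) == y_te
p_rf = np.clip(rf_correct.mean(), 0.05, 0.95)
ability = np.log(p_rf / (1 - p_rf))                    # Random Forest "ability"

# 1PL model: probability that the Random Forest answers each instance correctly.
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
unreliable = p_correct < 0.5                           # flag low-probability items

errors = ~rf_correct
share = (errors & unreliable).sum() / max(errors.sum(), 1)
print(f"Share of Random Forest errors on instances flagged unreliable: {share:.1%}")
```

Under this toy setup, the final print mirrors the kind of figure quoted in the abstract (the fraction of the model's errors concentrated on instances IRT marks as unreliable); the 83.8% reported by the authors comes from their own four datasets and IRT estimation, not from this sketch.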