Natural Learning (arXiv:2404.05903v1)
Abstract: We introduce Natural Learning (NL), a novel algorithm that pushes the explainability and interpretability of machine learning to an extreme. NL reduces decisions to intuitive rules such as "We rejected your loan because your income, employment status, and age collectively resemble a rejected prototype more than an accepted prototype." Applied to real-life datasets, NL produces impressive results. For example, on a colon cancer dataset with 1545 patients and 10935 genes, NL achieves 98.1% accuracy, comparable to DNNs and RF, by comparing just 3 genes of each test sample against 2 discovered prototypes. Similarly, on UCI's WDBC dataset, NL achieves 98.3% accuracy using only 7 features and 2 prototypes. Even on MNIST (0 vs. 1), NL achieves 99.5% accuracy with only 3 pixels from 2 prototype images. NL is inspired by prototype theory, a long-standing concept in cognitive psychology suggesting that people learn a single sparse prototype to categorize objects. Leveraging this relaxed assumption, we redesign the Support Vector Machine (SVM), replacing its mathematical formulation with a fully nearest-neighbor-based solution, and use locality-sensitive hashing to address the curse of dimensionality. Following the theory's generalizability principle, we propose a recursive method to prune non-core features. As a result, NL efficiently discovers the sparsest prototypes in O(n²pL) time, with high parallelization capacity in n. Evaluation on 17 benchmark datasets shows that NL significantly outperforms decision trees and logistic regression, two methods widely favored in healthcare for their interpretability. Moreover, NL matches the performance of fine-tuned black-box models such as deep neural networks and random forests in 40% of cases, with only 1-2% lower average accuracy. The code is available at http://natural-learning.cc.
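The decision rule the abstract describes, namely labeling a test sample by whichever class prototype it most resembles on a handful of core features, can be sketched in a few lines. The snippet below is a minimal illustration under that reading, not the authors' implementation: the prototypes, core-feature indices, and data are hypothetical, and the LSH-based prototype discovery and recursive feature pruning the paper proposes are omitted.

```python
import numpy as np

def classify(x, prototypes, labels, core_features):
    """Assign x the label of its nearest prototype, comparing only the
    sparse 'core' features (e.g., 3 genes out of 10935 in the paper's
    colon cancer example). Hypothetical sketch, not the NL codebase."""
    dists = [np.linalg.norm(x[core_features] - p[core_features])
             for p in prototypes]
    return labels[int(np.argmin(dists))]

# Hypothetical example: 5 features total, but only features {0, 3} are core.
proto_accepted = np.array([1.0, 0.2, 0.5, 0.9, 0.1])
proto_rejected = np.array([0.1, 0.8, 0.4, 0.2, 0.7])
sample = np.array([0.9, 0.5, 0.5, 0.8, 0.3])

print(classify(sample, [proto_accepted, proto_rejected],
               labels=["accepted", "rejected"], core_features=[0, 3]))
# -> "accepted": on the core features the sample lies nearer the
#    accepted prototype, mirroring the loan-decision rule quoted above.
```

Note that prediction cost here is independent of the full feature count: only the core features of the two prototypes are ever consulted, which is what makes the resulting explanation ("you resemble prototype A more than prototype B on these few features") human-readable.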