Knowledge Distillation of Convolutional Neural Networks through Feature Map Transformation using Decision Trees (2403.06089v1)
Abstract: Interpreting the reasoning of Deep Neural Networks (DNNs) remains challenging due to their perceived black-box nature, and this lack of transparency restricts deploying DNNs in many real-world tasks. We propose a distillation approach that extracts features from the final layer of a convolutional neural network (CNN) to provide insight into its reasoning. The feature maps in the final layer of the CNN are transformed into a one-dimensional feature vector using a fully connected layer. The extracted features are then used to train a decision tree to achieve the best accuracy under constraints on depth and number of nodes. We demonstrate the proposed approach on the DermaMNIST, OCTMNIST, and PneumoniaMNIST medical image datasets from the MedMNIST collection. We observe that the decision tree performs comparably to the CNN while having minimal complexity. These results support using decision trees to interpret the decisions made by CNNs.
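The distillation pipeline described in the abstract can be sketched as follows. In the paper, the feature vectors would come from the CNN's final fully connected layer; here synthetic vectors stand in for them so the sketch is self-contained, and the depth and leaf-count limits are hypothetical values, not the ones used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for features extracted from the CNN's final fully connected
# layer (the paper flattens the final-layer feature maps into a 1-D
# vector); synthetic 64-d vectors keep the example runnable on its own.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Distill into a decision tree under constraints on depth and number of
# nodes, as the abstract describes (hyperparameter values are illustrative).
tree = DecisionTreeClassifier(max_depth=5, max_leaf_nodes=16, random_state=0)
tree.fit(X_tr, y_tr)

acc = tree.score(X_te, y_te)
print(f"depth={tree.get_depth()}, leaves={tree.get_n_leaves()}, acc={acc:.2f}")
```

The resulting shallow tree can then be inspected directly (e.g. with `sklearn.tree.plot_tree`) to trace which extracted features drive each decision.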
- Maddimsetti Srinivas (1 paper)
- Debdoot Sheet (32 papers)