Interpretability of Neuro_Symbolic embedding-based constraint methods
Determine whether hybrid Neuro_Symbolic methods that map symbolic logic rules onto embeddings, using them as soft constraints or regularizers on a neural network's loss function (e.g., Logic Tensor Networks and Deep Ontology Networks), compromise interpretability, given that inference is still governed by the neural network.
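To make the question concrete, here is a minimal, hypothetical sketch (not the implementation from any cited system) of the constraint-based recipe in the spirit of Logic Tensor Networks: rule truth values are computed with differentiable fuzzy semantics over the network's soft outputs, and the rule's degree of violation is added to the loss as a regularizer. The predicate names, the Reichenbach implication, and the weight `lam` are illustrative assumptions.

```python
def product_and(a, b):
    """Product t-norm: differentiable fuzzy conjunction."""
    return a * b

def reichenbach_implies(a, b):
    """Reichenbach fuzzy implication: 1 - a + a*b."""
    return 1.0 - a + a * b

def rule_penalty(premise, conclusion):
    """Degree of violation of (premise -> conclusion); 0 when fully satisfied."""
    return 1.0 - reichenbach_implies(premise, conclusion)

def total_loss(task_loss, penalties, lam=0.5):
    """Task loss plus weighted soft-constraint penalties (lam is a hypothetical weight)."""
    return task_loss + lam * sum(penalties)

# Example: the network outputs soft truth values for isBird(x) and canFly(x);
# the rule isBird(x) -> canFly(x) is only partially satisfied, so it adds a
# gradient-carrying penalty to the loss rather than a hard logical veto.
p_bird, p_fly = 0.9, 0.4
penalty = rule_penalty(p_bird, p_fly)        # 1 - (1 - 0.9 + 0.36) = 0.54
loss = total_loss(0.2, [penalty], lam=0.5)   # 0.2 + 0.5 * 0.54 = 0.47
print(round(penalty, 2), round(loss, 2))
```

The interpretability question arises exactly here: the rule shapes the loss landscape during training, but at inference time only the neural network runs, so the rule's influence on any individual prediction is implicit.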
References
As the inference is still governed by NNs, it remains a research question whether this approach will compromise the interpretability.
— Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI
(arXiv:2401.01040, Wan et al., 2 Jan 2024), Section 2: Neuro-Symbolic AI Algorithms, Neuro_Symbolic approach paragraph