
Interpretability of Neuro_Symbolic embedding-based constraint methods

Determine whether hybrid Neuro_Symbolic methods that encode symbolic logic rules as vector embeddings acting as soft constraints or regularizers on a neural network's loss function (such as logic tensor networks and deep ontology networks) compromise interpretability, given that inference is ultimately governed by the neural network.


Background

The Neuro_Symbolic approach described in the paper encodes symbolic logic rules into vector embeddings that act as soft constraints or regularizers on the neural network's training objective. Examples include logic tensor networks and deep ontology networks, which have shown success in tasks such as knowledge graph completion.
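
To make the mechanism concrete, the sketch below (not taken from the survey) shows one common way such a soft constraint can be realized in PyTorch: a fuzzy-logic encoding of a single rule is added to the data-fit loss as a differentiable regularizer, in the spirit of logic tensor networks. The rule, predicate indices, model, and weight lambda_rule are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical multi-label predictor over two predicates, e.g. cat(x) and animal(x).
CAT, ANIMAL = 0, 1

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def rule_penalty(probs):
    # Soft encoding of the rule "cat(x) -> animal(x)" via the Lukasiewicz
    # implication: truth = min(1, 1 - P(cat) + P(animal)).
    # The penalty is 1 - truth, so a fully satisfied rule adds no loss.
    truth = torch.clamp(1.0 - probs[:, CAT] + probs[:, ANIMAL], max=1.0)
    return (1.0 - truth).mean()

x = torch.randn(64, 16)                    # toy features
y = torch.randint(0, 2, (64, 2)).float()   # toy multi-label targets
lambda_rule = 0.5                          # weight of the symbolic regularizer

logits = model(x)
probs = torch.sigmoid(logits)

# Total objective: data-fit term plus rule-satisfaction term. At inference time
# only the neural network is used, which is the interpretability concern raised here.
loss = bce(logits, y) + lambda_rule * rule_penalty(probs)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this style of training, the symbolic rule influences the learned weights only through the regularization term; once training ends, predictions come from the opaque network alone.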

While these models incorporate symbolic information, the paper notes that inference is ultimately governed by neural networks, raising concerns about whether this paradigm undermines the interpretability that symbolic reasoning typically offers. The authors explicitly state that this question remains unresolved.

References

As the inference is still governed by NNs, it remains a research question whether this approach will compromise the interpretability.

Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI (2401.01040 - Wan et al., 2 Jan 2024) in Section 2: Neuro-Symbolic AI Algorithms, Neuro_Symbolic approach paragraph