Differentiable Rule Induction with Learned Relational Features

Published 17 Jan 2022 in stat.ML, cs.LG, and stat.ME (arXiv:2201.06515v2)

Abstract: Rule-based decision models are attractive due to their interpretability. However, existing rule induction methods often result in long and consequently less interpretable rule models. This problem can often be attributed to the lack of appropriately expressive vocabulary, i.e., relevant predicates used as literals in the decision model. Most existing rule induction algorithms presume pre-defined literals, naturally decoupling the definition of the literals from the rule learning phase. In contrast, we propose the Relational Rule Network (R2N), a neural architecture that learns literals that represent a linear relationship among numerical input features along with the rules that use them. This approach opens the door to increasing the expressiveness of induced decision models by coupling literal learning directly with rule learning in an end-to-end differentiable fashion. On benchmark tasks, we show that these learned literals are simple enough to retain interpretability, yet improve prediction accuracy and provide sets of rules that are more concise compared to state-of-the-art rule induction algorithms.
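The core idea — learning linear-threshold literals jointly with the rules that combine them, via a differentiable relaxation — can be sketched as follows. This is an illustrative toy forward pass, not the authors' R2N implementation: the sigmoid relaxation, the temperature parameter, and the product-based soft AND/OR are common choices assumed here for concreteness.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b, rule_mask, temperature=10.0):
    """Soft forward pass of a tiny rule network (illustrative sketch).

    x         : (d,) numeric input features
    W, b      : (k, d) and (k,) parameters of k learned linear literals,
                where literal_i(x) = [W[i] @ x + b[i] > 0]
    rule_mask : (r, k) 0/1 matrix; rule_j is the conjunction of the
                literals it selects
    Returns the soft truth value of the disjunction of all rules.
    """
    # Soft literal truth values in (0, 1); the temperature sharpens the
    # sigmoid toward a hard threshold while keeping gradients nonzero.
    literals = sigmoid(temperature * (W @ x + b))                         # (k,)
    # Soft AND: product over each rule's selected literals;
    # unselected literals contribute a neutral factor of 1.
    rule_vals = np.prod(np.where(rule_mask == 1, literals, 1.0), axis=1)  # (r,)
    # Soft OR over rules: 1 - prod_j (1 - rule_j).
    return 1.0 - np.prod(1.0 - rule_vals)

# Example: two learned literals l1 = [x0 - x1 > 0], l2 = [x0 + x1 - 1 > 0],
# and a single rule "l1 AND l2".
W = np.array([[1.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, -1.0])
rule_mask = np.array([[1, 1]])

print(round(forward(np.array([2.0, 0.5]), W, b, rule_mask)))  # both literals hold -> 1
print(round(forward(np.array([0.0, 2.0]), W, b, rule_mask)))  # l1 fails -> 0
```

Because every step is differentiable, gradients flow through the rule layer back into `W` and `b`, which is what lets the literal definitions be learned end-to-end alongside the rules rather than fixed up front.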
