Learning Compositional Rules via Neural Program Synthesis (2003.05562v2)

Published 12 Mar 2020 in cs.AI and cs.LG

Abstract: Many aspects of human reasoning, including language, require learning rules from very little data. Humans can do this, often learning systematic rules from very few examples, and combining these rules to form compositional rule-based systems. Current neural architectures, on the other hand, often fail to generalize in a compositional manner, especially when evaluated in ways that vary systematically from training. In this work, we present a neuro-symbolic model which learns entire rule systems from a small set of examples. Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples, drawing upon techniques from the neural program synthesis literature. Our rule-synthesis approach outperforms neural meta-learning techniques in three domains: an artificial instruction-learning domain used to evaluate human learning, the SCAN challenge datasets, and learning rule-based translations of number words into integers for a wide range of human languages.
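To make the approach concrete, here is a minimal, hypothetical sketch in Python of the kind of explicit rule system the paper describes inducing, using SCAN-style commands. The rule table PRIMITIVES, the interpret function, and the handling of a "twice" modifier are illustrative assumptions, not the authors' grammar formalism or implementation; in the paper's setup, a neural synthesizer proposes rules like these from a handful of examples, and the symbolic rule system, not the network, then maps new inputs to outputs.

```python
# Hypothetical sketch of an explicit rule system for SCAN-like commands.
# This is NOT the paper's implementation; it only illustrates the kind of
# symbolic rule system a neural program synthesizer might induce.

from typing import Dict, List

# Illustrative primitive rules: each source token rewrites to output tokens.
PRIMITIVES: Dict[str, List[str]] = {
    "jump": ["JUMP"],
    "walk": ["WALK"],
}

def interpret(command: List[str]) -> List[str]:
    """Apply the rule system to a command, left to right.

    'x twice' is a compositional rule: the interpretation of x, repeated.
    """
    if not command:
        return []
    head, rest = command[0], command[1:]
    if rest and rest[0] == "twice":
        return PRIMITIVES[head] * 2 + interpret(rest[1:])
    return PRIMITIVES[head] + interpret(rest)

if __name__ == "__main__":
    print(interpret(["jump", "twice", "walk"]))  # ['JUMP', 'JUMP', 'WALK']
```

Because generalization is carried by the induced rules rather than by pattern-matching in network weights, a correct rule system extrapolates to compositions never seen during training, e.g. interpreting "jump twice" after observing only "jump" and "walk twice" separately.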

Authors (4)
  1. Maxwell I. Nye
  2. Armando Solar-Lezama
  3. Joshua B. Tenenbaum
  4. Brenden M. Lake
Citations (112)
