Training Classifiers with Natural Language Explanations (1805.03818v4)
Abstract: Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification). In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. A semantic parser converts these explanations into programmatic labeling functions that generate noisy labels for an arbitrary amount of unlabeled data, which is then used to train a classifier. On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5-100$\times$ faster by providing explanations instead of just labels. Furthermore, given the inherent imperfection of labeling functions, we find that a simple rule-based semantic parser suffices.
- Braden Hancock (12 papers)
- Paroma Varma (6 papers)
- Stephanie Wang (18 papers)
- Martin Bringmann (3 papers)
- Percy Liang (239 papers)
- Christopher Ré (194 papers)
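
The abstract describes a pipeline in which a natural language explanation is parsed into a programmatic labeling function that votes on unlabeled examples. The following is a minimal illustrative sketch of that idea, not the authors' code or the BabbleLabble API; the `parse_explanation` helper, its toy grammar, and the `example["between"]` field are assumptions made for the example.

```python
import re
from typing import Callable, Optional

# Labels: 1 = positive relation, -1 = negative, 0 = abstain.
LabelingFunction = Callable[[dict], int]

def parse_explanation(explanation: str, label: int) -> Optional[LabelingFunction]:
    """Tiny rule-based 'semantic parser' (hypothetical): handles explanations
    of the form "because the word '<w>' appears between X and Y"."""
    match = re.search(r"the words? '([^']+)' (?:appears?|is) between", explanation)
    if match is None:
        return None  # explanation not covered by this toy grammar
    keyword = match.group(1)

    def lf(example: dict) -> int:
        # example["between"] is assumed to hold the tokens between the
        # two candidate entity mentions; vote `label` if the keyword is
        # present, otherwise abstain.
        return label if keyword in example["between"] else 0

    return lf

# Usage: an explanation for a spouse-relation candidate becomes a labeling
# function that can then label arbitrary amounts of unlabeled data noisily.
lf = parse_explanation("because the word 'wife' appears between X and Y", label=1)
example = {"between": ["and", "his", "wife", ","]}
print(lf(example))  # -> 1 (votes positive); non-matching examples yield 0 (abstain)
```

The noisy votes from many such functions would then be aggregated (e.g., by a label model as in data-programming systems) to produce training labels for the downstream classifier, which is consistent with the abstract's claim that a simple rule-based parser suffices despite imperfect labeling functions.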