Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations (2305.14618v1)
Abstract: Abductive reasoning aims to find plausible explanations for an event. This style of reasoning is critical for commonsense tasks, where there are often multiple plausible explanations. Existing approaches to abductive reasoning in NLP often rely on manually generated annotations for supervision; however, such annotations can be subjective and biased. Instead of using direct supervision, this work proposes an approach to abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context. The method uses posterior regularization to enforce a mutual exclusion constraint, encouraging the model to learn the distinction between merely fluent explanations and genuinely plausible ones. We evaluate our approach on a diverse set of abductive reasoning datasets; experimental results show that our approach outperforms or is comparable to zero-shot application of pretrained language models and to other knowledge-augmented zero-shot methods.
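To make the mutual-exclusion intuition concrete, here is a minimal sketch (not the paper's actual training procedure, which uses posterior regularization during learning): it scores a set of mutually exclusive candidate explanations with an off-the-shelf GPT-2 and then renormalizes across the candidate set, so explanations compete for probability mass rather than being judged for fluency in isolation. The example context, candidate strings, and length-normalized scoring are all illustrative assumptions.

```python
# Sketch: score mutually exclusive explanations with a pretrained LM, then
# renormalize across the candidate set (the mutual-exclusion intuition).
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def candidate_log_likelihood(context: str, explanation: str) -> float:
    """Average per-token log-likelihood of `explanation` given `context`.
    Assumes the context tokenizes as a prefix of the concatenated string,
    which holds for GPT-2 BPE with a space separator in typical cases."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + explanation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Token at position i is predicted by the logits at position i-1,
    # so shift logits left by one relative to the targets.
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_scores = log_probs[torch.arange(targets.shape[0]), targets]
    # Keep only the scores for the explanation tokens.
    exp_start = ctx_ids.shape[1]
    return token_scores[exp_start - 1:].mean().item()

# Hypothetical example: candidates are mutually exclusive explanations.
context = "The lawn was wet this morning, but it did not rain last night."
candidates = [
    "The sprinklers ran overnight.",
    "A meteor landed on the lawn.",
]
scores = torch.tensor([candidate_log_likelihood(context, c) for c in candidates])
# Mutual exclusion: exactly one explanation is correct, so normalize across
# the candidate set instead of evaluating each explanation in isolation.
posterior = F.softmax(scores, dim=0)
for c, p in zip(candidates, posterior.tolist()):
    print(f"{p:.3f}  {c}")
```

The key design point the sketch illustrates is the competitive normalization: a fluent but implausible explanation can still receive a high standalone likelihood, but once probability mass must be shared across the mutually exclusive set, the more plausible candidate dominates. The paper enforces this constraint during training via posterior regularization rather than only at inference time as above.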
- Wenting Zhao
- Justin T. Chiu
- Claire Cardie
- Alexander M. Rush