Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner (2205.09224v2)
Abstract: Large language models (LLMs) have achieved high performance on various question answering (QA) benchmarks, but the explainability of their output remains elusive. Structured explanations, called entailment trees, were recently suggested as a way to explain and inspect a QA system's answer. In order to better generate such entailment trees, we propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. Unlike previous approaches, our method interleaves generation steps with retrieval of premises, allowing the model to leverage intermediate conclusions and mitigating the input size limit of baseline encoder-decoder models. We conduct experiments on the EntailmentBank dataset, where we outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
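The iterative retrieve-then-generate loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `retrieve` stands in for a learned dense retriever (here, simple word overlap) and `generate_step` stands in for a seq2seq entailment-step generator; both are hypothetical placeholders. The key idea shown is that each intermediate conclusion is fed back into the next retrieval query.

```python
# Hedged sketch of an iterative retrieval-generation loop (toy stand-ins,
# not the IRGR model itself).

def retrieve(hypothesis, intermediates, corpus, k=2):
    """Toy retriever: rank premises by word overlap with the hypothesis
    plus any intermediate conclusions generated so far."""
    query_words = set(hypothesis.split())
    for conclusion in intermediates:
        query_words |= set(conclusion.split())
    ranked = sorted(corpus, key=lambda p: -len(query_words & set(p.split())))
    return ranked[:k]

def generate_step(premises):
    """Toy generator: fuse the retrieved premises into one intermediate
    conclusion (a real system would use an encoder-decoder model)."""
    return " and ".join(premises)

def build_entailment_steps(hypothesis, corpus, max_steps=3):
    """Build one entailment step per iteration; intermediate conclusions
    from earlier steps enrich later retrieval queries, so the model never
    needs the whole corpus in its input at once."""
    steps, intermediates = [], []
    for _ in range(max_steps):
        premises = retrieve(hypothesis, intermediates, corpus)
        conclusion = generate_step(premises)
        steps.append((premises, conclusion))
        intermediates.append(conclusion)
    return steps
```

Because retrieval is repeated per step, only a small set of premises (plus prior conclusions) is passed to the generator at each iteration, which is how the approach sidesteps the encoder-decoder input-length limit the abstract mentions.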
- Danilo Ribeiro
- Shen Wang
- Xiaofei Ma
- Rui Dong
- Xiaokai Wei
- Henry Zhu
- Xinchi Chen
- Zhiheng Huang
- Peng Xu
- Andrew Arnold
- Dan Roth