
Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner (2205.09224v2)

Published 18 May 2022 in cs.CL

Abstract: Large language models (LLMs) have achieved high performance on various question answering (QA) benchmarks, but the explainability of their output remains elusive. Structured explanations, called entailment trees, were recently suggested as a way to explain and inspect a QA system's answer. In order to better generate such entailment trees, we propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. Contrary to previous approaches, our method combines generation steps and retrieval of premises, allowing the model to leverage intermediate conclusions and mitigating the input size limit of baseline encoder-decoder models. We conduct experiments using the EntailmentBank dataset, where we outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
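
The abstract describes a control loop: retrieve candidate premises, generate a single entailment step, then feed the intermediate conclusion back into retrieval so later steps can build on it. The sketch below illustrates that loop in Python. It is a minimal sketch under stated assumptions: the function names (`retrieve`, `generate_step`, `build_entailment_tree`), the word-overlap retriever, and the placeholder step generator are hypothetical stand-ins, not the IRGR implementation, which uses a trained dense retriever and an encoder-decoder generator.

```python
# Illustrative sketch of an iterative retrieval-generation loop for
# entailment tree construction. All names and heuristics here are
# hypothetical; IRGR itself uses learned retrieval and generation models.
from dataclasses import dataclass


@dataclass
class EntailmentStep:
    premises: list[str]   # sentences that jointly entail the conclusion
    conclusion: str       # intermediate (or final) conclusion


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank corpus sentences by word overlap with the query."""
    query_words = set(query.lower().split())

    def overlap(sentence: str) -> int:
        return len(set(sentence.lower().split()) & query_words)

    return sorted(corpus, key=overlap, reverse=True)[:k]


def generate_step(premises: list[str]) -> EntailmentStep:
    """Placeholder generator: a real system would call an encoder-decoder
    model here to compose the premises into a new conclusion."""
    conclusion = " and ".join(premises)
    return EntailmentStep(premises=premises, conclusion=conclusion)


def build_entailment_tree(hypothesis: str, corpus: list[str],
                          max_steps: int = 5) -> list[EntailmentStep]:
    """Iteratively retrieve premises and generate one entailment step at a
    time, feeding each intermediate conclusion back into the retrieval query.
    Because only a few premises are encoded per step, the input-size limit
    of a single encoder pass is mitigated."""
    steps: list[EntailmentStep] = []
    available = list(corpus)
    query = hypothesis
    for _ in range(max_steps):
        premises = retrieve(query, available)
        if not premises:
            break
        step = generate_step(premises)
        steps.append(step)
        # The intermediate conclusion becomes retrievable for later steps.
        available = [s for s in available if s not in premises]
        available.append(step.conclusion)
        query = step.conclusion + " " + hypothesis
    return steps


if __name__ == "__main__":
    corpus = [
        "an eclipse occurs when one body blocks light from another",
        "the moon can block sunlight from reaching the earth",
        "the sun is a source of light",
    ]
    hypothesis = "a solar eclipse occurs when the moon blocks sunlight"
    for step in build_entailment_tree(hypothesis, corpus, max_steps=2):
        print(step.premises, "->", step.conclusion)
```

The toy retriever and string-joining generator exist only to make the loop runnable; the design point they illustrate is that retrieval and generation alternate, with each intermediate conclusion added back to the retrievable pool, rather than retrieving all premises once up front.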

Authors (11)
  1. Danilo Ribeiro (4 papers)
  2. Shen Wang (111 papers)
  3. Xiaofei Ma (31 papers)
  4. Rui Dong (23 papers)
  5. Xiaokai Wei (14 papers)
  6. Henry Zhu (12 papers)
  7. Xinchi Chen (15 papers)
  8. Zhiheng Huang (33 papers)
  9. Peng Xu (357 papers)
  10. Andrew Arnold (14 papers)
  11. Dan Roth (222 papers)
Citations (35)
