Interpretable Question Answering on Knowledge Bases and Text (1906.10924v1)

Published 26 Jun 2019 in cs.CL, cs.AI, and cs.LG

Abstract: Interpretability of ML models becomes more relevant with their increasing adoption. In this work, we address the interpretability of ML-based question answering (QA) models on a combination of knowledge bases (KB) and text documents. We adapt post hoc explanation methods such as LIME and input perturbation (IP) and compare them with the self-explanatory attention mechanism of the model. For this purpose, we propose an automatic evaluation paradigm for explanation methods in the context of QA. We also conduct a study with human annotators to evaluate whether explanations help them identify better QA models. Our results suggest that IP provides better explanations than LIME or attention, according to both automatic and human evaluation. We obtain the same ranking of methods in both experiments, which supports the validity of our automatic evaluation paradigm.
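
To make the input perturbation (IP) idea concrete: a common formulation scores each input token by how much the model's answer confidence drops when that token is removed. The sketch below illustrates this under that assumption; it is not the paper's implementation, and the function names (`input_perturbation`, `score_answer`, `toy_score`) are hypothetical stand-ins for a black-box QA model.

```python
# Minimal sketch of input-perturbation explanations for QA, assuming a
# black-box score_answer(tokens) that returns the model's confidence in
# its predicted answer. Names are illustrative, not from the paper.
from typing import Callable, List, Tuple


def input_perturbation(
    tokens: List[str],
    score_answer: Callable[[List[str]], float],
) -> List[Tuple[str, float]]:
    """Attribute importance to each token as the confidence drop when
    that single token is deleted from the question."""
    base = score_answer(tokens)
    importances = []
    for i in range(len(tokens)):
        perturbed = tokens[:i] + tokens[i + 1:]  # delete one token
        importances.append((tokens[i], base - score_answer(perturbed)))
    return importances


if __name__ == "__main__":
    def toy_score(toks: List[str]) -> float:
        # Stand-in model that relies heavily on the word "capital".
        return 0.9 if "capital" in toks else 0.2

    question = "what is the capital of france".split()
    for token, weight in input_perturbation(question, toy_score):
        print(f"{token:>10s}  {weight:+.2f}")
```

Under this toy model, "capital" receives a large positive importance while the other tokens receive none, mirroring how IP surfaces the input pieces the model actually depends on.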

Authors (3)
  1. Alona Sydorova (1 paper)
  2. Nina Poerner (9 papers)
  3. Benjamin Roth (48 papers)
Citations (24)
