Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions (2212.10189v2)

Published 20 Dec 2022 in cs.CL and cs.AI

Abstract: When answering natural language questions over knowledge bases, missing facts, incomplete schema, and limited scope naturally lead to many questions being unanswerable. While answerability has been explored in other QA settings, it has not been studied for QA over knowledge bases (KBQA). We create GrailQAbility, a new benchmark KBQA dataset with unanswerability, by first identifying various forms of KB incompleteness that make questions unanswerable, and then systematically adapting GrailQA (a popular KBQA dataset with only answerable questions). Experimenting with three state-of-the-art KBQA models, we find that all three suffer a drop in performance even after suitable adaptation for unanswerable questions. In addition, these models often detect unanswerability for the wrong reasons and find specific forms of unanswerability particularly difficult to handle. This underscores the need for further research in making KBQA systems robust to unanswerability.
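
The core idea behind the GrailQAbility construction is simulating KB incompleteness: deleting facts or schema elements from the KB so that previously answerable questions become unanswerable. The sketch below illustrates that idea on a toy triple store; the triple representation, the drop strategies, and all function and entity names are hypothetical illustrations, not the paper's actual pipeline.

```python
import random

# Toy KB as a set of (subject, relation, object) triples. This
# representation and the drop strategies below are illustrative
# assumptions, not the paper's actual construction pipeline.
KB = {
    ("TheMartian", "author", "AndyWeir"),
    ("TheMartian", "genre", "ScienceFiction"),
    ("AndyWeir", "nationality", "USA"),
}

def drop_facts(kb, fraction, seed=0):
    """Simulate missing facts: randomly delete a fraction of triples."""
    rng = random.Random(seed)
    keep = rng.sample(sorted(kb), k=round(len(kb) * (1 - fraction)))
    return set(keep)

def drop_relation(kb, relation):
    """Simulate an incomplete schema: remove every triple that uses
    the given relation, as if it were absent from the ontology."""
    return {t for t in kb if t[1] != relation}

def is_answerable(kb, query_triples):
    """A question stays answerable only if every triple required by
    its logical form is still present in the (possibly pruned) KB."""
    return all(t in kb for t in query_triples)

# "Who wrote The Martian?" requires a single author fact.
needed = [("TheMartian", "author", "AndyWeir")]
pruned = drop_relation(KB, "author")
print(is_answerable(KB, needed))      # True: fact present in full KB
print(is_answerable(pruned, needed))  # False: schema element removed
```

In the abstract's terms, drop_relation corresponds to schema-level incompleteness and drop_facts to fact-level incompleteness, two of the forms of KB incompleteness the paper identifies as sources of unanswerability.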

Authors (6)
  1. Mayur Patidar (4 papers)
  2. Prayushi Faldu (3 papers)
  3. Avinash Singh (86 papers)
  4. Lovekesh Vig (78 papers)
  5. Indrajit Bhattacharya (13 papers)
  6. Mausam (69 papers)
Citations (5)
