
Few-shot In-context Learning for Knowledge Base Question Answering (2305.01750v2)

Published 2 May 2023 in cs.CL and cs.AI

Abstract: Question answering over knowledge bases is considered a difficult problem due to the challenge of generalizing to a wide variety of possible natural language questions. Additionally, the heterogeneity of knowledge base schema items between different knowledge bases often necessitates specialized training for different knowledge base question-answering (KBQA) datasets. To handle questions over diverse KBQA datasets with a unified training-free framework, we propose KB-BINDER, which for the first time enables few-shot in-context learning over KBQA tasks. Firstly, KB-BINDER leverages LLMs like Codex to generate logical forms as the draft for a specific question by imitating a few demonstrations. Secondly, KB-BINDER grounds on the knowledge base to bind the generated draft to an executable one with BM25 score matching. The experimental results on four public heterogeneous KBQA datasets show that KB-BINDER can achieve a strong performance with only a few in-context demonstrations. Especially on GraphQA and 3-hop MetaQA, KB-BINDER can even outperform the state-of-the-art trained models. On GrailQA and WebQSP, our model is also on par with other fully-trained models. We believe KB-BINDER can serve as an important baseline for future research. Our code is available at https://github.com/ltl3A87/KB-BINDER.

Few-shot In-context Learning for Knowledge Base Question Answering

The paper entitled "Few-shot In-context Learning for Knowledge Base Question Answering" presents a novel approach to address the challenges of answering questions over knowledge bases (KBQA) through a training-free framework called KB-BINDER. This method leverages the capabilities of LLMs, such as Codex, to facilitate the process of generating logical forms and binding these forms to executable queries over diverse knowledge bases. Given the historical challenges associated with KBQA, particularly with adapting models to various KB schemas and the need for extensive annotated training data, the KB-BINDER framework offers a promising alternative by targeting few-shot in-context learning without requiring specialized training for each new knowledge base schema.

The methodology outlined in the paper involves several critical stages. The first stage uses an LLM to generate logical forms as preliminary drafts for questions, conditioned on a few demonstration examples. This draft creation capitalizes on the inherent generalizability and reasoning strengths of models like Codex to produce reasonable structural representations for unseen questions. The next stage binds these drafts to the actual knowledge base using lexicon-based similarity search with BM25 score matching, refining them into executable logical forms. This approach emulates the structure and logic of fully trained systems while requiring far less initial data, which is particularly beneficial in low-resource settings.
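As a rough illustration of the binding stage, the sketch below uses BM25 to map a drafted (possibly non-existent) schema item onto the closest real relation in the KB vocabulary. The relation list, tokenizer, and the rank_bm25 dependency are assumptions for illustration, not the paper's exact implementation.

```python
# pip install rank_bm25  (assumed dependency for this sketch)
from rank_bm25 import BM25Okapi

# Hypothetical KB relation vocabulary; in KB-BINDER this would come from the
# target knowledge base's schema items (e.g. Freebase relations).
kb_relations = [
    "music.artist.origin",
    "people.person.place_of_birth",
    "film.film.directed_by",
    "location.location.containedby",
]

def tokenize(item: str) -> list[str]:
    # Split schema items on dots and underscores so that lexically similar
    # drafted relations score highly against real ones.
    return item.replace(".", " ").replace("_", " ").lower().split()

bm25 = BM25Okapi([tokenize(r) for r in kb_relations])

def bind_relation(drafted: str, top_k: int = 1) -> list[str]:
    # Return the most lexically similar real KB relations for a drafted one.
    return bm25.get_top_n(tokenize(drafted), kb_relations, n=top_k)

# The LLM draft may name a relation that does not exist verbatim in the KB;
# BM25 matching grounds it to the closest executable schema item.
print(bind_relation("person.birth_place"))  # e.g. ['people.person.place_of_birth']
```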

Experimentally, KB-BINDER delivers robust performance across several KBQA datasets, namely GrailQA, WebQSP, GraphQA, and MetaQA, demonstrating its efficacy when compared to fully trained state-of-the-art models. Notably, it achieves higher F1 scores than previous models on GraphQA and MetaQA, showcasing its capability in domain-specific and compositional generalization scenarios. Moreover, introducing variation in the exemplars and applying self-consistency through majority voting further improves the framework, as evidenced by the stronger results of KB-BINDER(K)-R.
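The self-consistency step can be pictured as simple majority voting over the answers produced by the different grounded candidates. The helper below is a hypothetical illustration of that idea, not the authors' code.

```python
from collections import Counter

def majority_vote(candidate_answers: list[frozenset[str]]) -> frozenset[str]:
    # Each bound logical form is executed against the KB and yields an answer
    # set; the answer set that appears most often across candidates wins.
    counts = Counter(candidate_answers)
    return counts.most_common(1)[0][0]

# Hypothetical answer sets produced by several bound drafts of one question.
answers = [
    frozenset({"Toronto"}),
    frozenset({"Toronto"}),
    frozenset({"Ontario"}),
]
print(majority_vote(answers))  # -> frozenset({'Toronto'})
```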

The implications of these findings address some fundamental issues faced in KBQA. KB-BINDER proposes a viable solution for rapidly deploying KBQA systems across various domains and KB schemas without the profound resource investment typically required for training domain-specific models. This capability suggests potential for KB-BINDER to serve as a baseline for future research, especially concerning zero-shot and few-shot learning applications in knowledge management. Practically, KB-BINDER’s unified approach could simplify the integration of KBQA systems in real-world settings, allowing for dynamically adaptable systems that do not rely heavily on pre-existing data. Theoretically, it pushes forward the understanding and use of LLMs beyond traditional applications, hinting at broader fields where they may be effectively implemented with minimal training.

Future developments may explore enhanced exemplar retrieval mechanisms and instruction integration to further refine model outputs and logical form generation. These enhancements could address some limitations outlined in the binding process, potentially increasing the performance consistency across varying types of questions and domains. Continuing to build on these insights could bolster the exploration of scalable KBQA solutions, with potential applications across AI and NLP fields.

Authors (6)
  1. Tianle Li (25 papers)
  2. Xueguang Ma (36 papers)
  3. Alex Zhuang (5 papers)
  4. Yu Gu (218 papers)
  5. Yu Su (138 papers)
  6. Wenhu Chen (134 papers)
Citations (61)