
Fusing Context Into Knowledge Graph for Commonsense Question Answering (2012.04808v3)

Published 9 Dec 2020 in cs.CL

Abstract: Commonsense question answering (QA) requires a model to grasp commonsense and factual knowledge to answer questions about world events. Many prior methods couple language models with knowledge graphs (KG). However, although a KG contains rich structural information, it lacks the context needed for a more precise understanding of the concepts. This creates a gap when fusing knowledge graphs into language modeling, especially when there is insufficient labeled data. Thus, we propose to employ external entity descriptions to provide contextual information for knowledge understanding. We retrieve descriptions of related concepts from Wiktionary and feed them as additional input to pre-trained language models. The resulting model achieves state-of-the-art results on the CommonsenseQA dataset and the best result among non-generative models on OpenBookQA.
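The core idea in the abstract — retrieving dictionary descriptions of question concepts and appending them to the language model's input — can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the `lookup_definition` helper, the `[SEP]`-joined prompt format, and the toy dictionary are all hypothetical stand-ins for the paper's Wiktionary retrieval and encoder input construction.

```python
def lookup_definition(concept, dictionary):
    """Stand-in for the paper's Wiktionary description retrieval."""
    return dictionary.get(concept.lower(), "")

def build_input(question, choice, concepts, dictionary):
    """Concatenate the question, an answer choice, and retrieved
    concept descriptions into one sequence for a pre-trained LM."""
    context = " ".join(
        f"{c}: {lookup_definition(c, dictionary)}"
        for c in concepts
        if lookup_definition(c, dictionary)
    )
    return f"{question} [SEP] {choice} [SEP] {context}"

# Toy usage with a hand-made mini-dictionary standing in for Wiktionary.
defs = {"bank": "an institution where one can deposit money"}
seq = build_input("Where do people keep savings?", "bank", ["bank"], defs)
print(seq)
```

In the full system, one such sequence would be built per answer choice and scored by the pre-trained model, with the description context narrowing the gap between the KG's bare concept nodes and their intended meanings.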

Authors (6)
  1. Yichong Xu (42 papers)
  2. Chenguang Zhu (100 papers)
  3. Ruochen Xu (35 papers)
  4. Yang Liu (2253 papers)
  5. Michael Zeng (76 papers)
  6. Xuedong Huang (22 papers)
Citations (64)