
Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge (2105.13607v2)

Published 28 May 2021 in cs.CL

Abstract: Knowledge facts are typically represented by relational triples, while we observe that some commonsense facts are represented by triples whose form is inconsistent with how the fact is expressed in natural language. This inconsistency poses a challenge for pre-trained language models when dealing with such commonsense knowledge facts. In this paper, we term such knowledge deep commonsense knowledge and conduct extensive exploratory experiments on it. We show that deep commonsense knowledge makes up a significant part of commonsense knowledge, while conventional methods fail to capture it effectively. We further propose a novel method to mine the deep commonsense knowledge distributed in sentences, alleviating conventional methods' reliance on the triple representation form of knowledge. Experiments demonstrate that the proposed method significantly improves performance in mining deep commonsense knowledge.
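
The knowledge-language inconsistency the abstract describes can be made concrete with a small sketch. The snippet below (not the authors' code) uses a standard masked-LM pseudo-log-likelihood probe to score the same commonsense fact fed to BERT in two surface forms: a naively verbalized (head, relation, tail) triple versus a fluent sentence. The example triple and both verbalizations are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: contrast how a pre-trained masked LM scores a
# commonsense fact given as a raw triple vs. as a fluent sentence.
# The fact ("rain", "CapableOf", "making people wet") is a made-up example.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Average masked-token log-probability: mask each token in turn
    and accumulate the model's log-probability of the original token."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the [CLS] and [SEP] special tokens at the ends.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total / (len(ids) - 2)

# Triple-style input: surface form mirrors the (head, relation, tail) triple.
triple_text = "rain capable of making people wet"
# Sentence-style input: the same fact as it is naturally expressed.
sentence_text = "Rain can make people wet."

print("triple form:  ", pseudo_log_likelihood(triple_text))
print("sentence form:", pseudo_log_likelihood(sentence_text))
```

For "deep" commonsense facts, the triple-style verbalization diverges sharply from any natural phrasing, which is why the paper argues for mining such knowledge from sentences rather than relying on the triple representation.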

Authors (5)
  1. Yi Zhang (994 papers)
  2. Lei Li (1293 papers)
  3. Yunfang Wu (50 papers)
  4. Qi Su (58 papers)
  5. Xu Sun (194 papers)
Citations (4)
