
Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge (2305.05976v2)

Published 10 May 2023 in cs.CL

Abstract: LLMs have been widely studied for their ability to store and utilize positive knowledge. However, negative knowledge, such as "lions don't live in the ocean", is also ubiquitous in the world but rarely mentioned explicitly in text. What do LLMs know about negative knowledge? This work examines the ability of LLMs to understand negative commonsense knowledge. We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs. Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions. We term this phenomenon the belief conflict of LLMs. Our further analysis shows that statistical shortcuts and negation reporting bias from language-modeling pre-training cause this conflict.
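The two probe formats described in the abstract can be sketched as prompt templates built from the same commonsense triple. This is a hypothetical illustration only: the function names and exact prompt wording below are assumptions, not the paper's actual templates.

```python
# Sketch of the two probes from the abstract: constrained keywords-to-sentence
# generation (CG) and Boolean question-answering (QA), built from one triple.
# Prompt wording here is illustrative, not taken from the paper.

def cg_prompt(subject: str, relation: str, obj: str, negated: bool = True) -> str:
    """CG probe: ask the model to write a sentence containing all keywords,
    adding a negation cue when probing negative knowledge."""
    keywords = [subject, relation, obj] + (["not"] if negated else [])
    return "Write a short sentence containing all of these keywords: " + ", ".join(keywords)

def qa_prompt(subject: str, relation: str, obj: str) -> str:
    """QA probe: ask a polar yes/no question about the same fact."""
    return f"Question: Do {subject} {relation} {obj}? Answer yes or no."

print(cg_prompt("lions", "live in", "the ocean"))
print(qa_prompt("lions", "live in", "the ocean"))
```

The paper's "belief conflict" is the observation that a model may answer the QA probe correctly ("no") while still generating a sentence for the CG probe that asserts the false positive fact.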

Authors (6)
  1. Jiangjie Chen (46 papers)
  2. Wei Shi (116 papers)
  3. Ziquan Fu (5 papers)
  4. Sijie Cheng (23 papers)
  5. Lei Li (1293 papers)
  6. Yanghua Xiao (151 papers)
Citations (39)
