COPEN: Probing Conceptual Knowledge in Pre-trained Language Models (2211.04079v1)

Published 8 Nov 2022 in cs.CL

Abstract: Conceptual knowledge is fundamental to human cognition and knowledge bases. However, existing knowledge probing works only focus on evaluating factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge. Since conceptual knowledge often appears as implicit commonsense behind texts, designing probes for conceptual knowledge is hard. Inspired by knowledge representation schemata, we comprehensively evaluate conceptual knowledge of PLMs by designing three tasks to probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts, respectively. For the tasks, we collect and annotate 24k data instances covering 393 concepts, which is COPEN, a COnceptual knowledge Probing bENchmark. Extensive experiments on different sizes and types of PLMs show that existing PLMs systematically lack conceptual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing human-like cognition in PLMs. COPEN and our codes are publicly released at https://github.com/THU-KEG/COPEN.
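
To make the probing setup concrete, below is a minimal sketch of cloze-style conceptual probing with a masked PLM, in the spirit of the benchmark's entity-conceptualization task. This is an illustrative assumption, not COPEN's exact protocol: the prompt template, the `bert-base-uncased` checkpoint, and the `concept_score` helper are all hypothetical choices, and the sketch only handles concepts that tokenize to a single vocabulary item.

```python
# Hypothetical sketch: score how strongly a masked PLM associates an entity
# with a candidate concept via a cloze prompt. Not COPEN's actual protocol.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def concept_score(entity: str, concept: str) -> float:
    """Probability the PLM assigns to `concept` at the masked slot
    of a simple hypernymy-style prompt (single-token concepts only)."""
    prompt = f"{entity} is a kind of {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    # Locate the [MASK] position in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    concept_id = tokenizer.convert_tokens_to_ids(concept)
    return probs[concept_id].item()

# Rank candidate concepts for an entity (hypothetical examples).
candidates = ["fruit", "vehicle", "animal"]
scores = {c: concept_score("banana", c) for c in candidates}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Whether the highest-scoring concept matches the gold concept is one way to quantify the kind of conceptual-knowledge gap the paper reports; the paper's released code at the repository above implements the actual evaluation.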

Authors (8)
  1. Hao Peng (291 papers)
  2. Xiaozhi Wang (51 papers)
  3. Shengding Hu (34 papers)
  4. Hailong Jin (6 papers)
  5. Lei Hou (127 papers)
  6. Juanzi Li (144 papers)
  7. Zhiyuan Liu (433 papers)
  8. Qun Liu (230 papers)
Citations (21)