
Language Model Analysis for Ontology Subsumption Inference (2302.06761v3)

Published 14 Feb 2023 in cs.CL, cs.AI, and cs.LO

Abstract: Investigating whether pre-trained language models (LMs) can function as knowledge bases (KBs) has attracted wide research interest recently. However, existing works focus on simple, triple-based, relational KBs and omit more sophisticated, logic-based, conceptualised KBs such as OWL ontologies. To investigate an LM's knowledge of ontologies, we propose OntoLAMA, a set of inference-based probing tasks and datasets from ontology subsumption axioms involving both atomic and complex concepts. We conduct extensive experiments on ontologies of different domains and scales, and our results demonstrate that LMs encode relatively less background knowledge of Subsumption Inference (SI) than of traditional Natural Language Inference (NLI) but can improve on SI significantly when a small number of samples are given. We will open-source our code and datasets.
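To illustrate the inference-based probing idea the abstract describes, the sketch below shows one plausible way a subsumption axiom could be verbalised into an NLI-style premise/hypothesis pair. The templates and function name are hypothetical and not taken from the paper; OntoLAMA's actual verbalisation of atomic and complex concepts may differ.

```python
# Hypothetical sketch: recasting an ontology subsumption axiom
# (SubConcept ⊑ SuperConcept) as an NLI-style probing example.
# Templates and names are illustrative assumptions, not the paper's method.

def verbalise_subsumption(sub_concept: str, super_concept: str) -> dict:
    """Turn a subsumption axiom into a premise/hypothesis pair.

    A positive example asserts that membership in the sub-concept
    entails membership in the super-concept; the probing task is to
    classify the pair as entailment vs. non-entailment.
    """
    premise = f"X is {sub_concept}."
    hypothesis = f"X is {super_concept}."
    return {"premise": premise, "hypothesis": hypothesis, "label": "entailment"}

example = verbalise_subsumption("a beagle", "a dog")
```

Negative examples could then be constructed by pairing concepts with no subsumption relation and labelling them as non-entailment, mirroring the few-shot SI setup the abstract reports on.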

Authors (5)
  1. Yuan He
  2. Jiaoyan Chen
  3. Ernesto Jiménez-Ruiz
  4. Hang Dong
  5. Ian Horrocks
Citations (18)