A Review on Language Models as Knowledge Bases (2204.06031v1)

Published 12 Apr 2022 in cs.CL and cs.AI

Abstract: Recently, there has been a surge of interest in the NLP community in the use of pretrained language models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs trained on a sufficiently large (web) corpus encode a significant amount of knowledge implicitly in their parameters. The resulting LM can be probed for different kinds of knowledge and can thus act as a KB. This has a major advantage over traditional KBs in that this method requires no human supervision. In this paper, we present a set of aspects that we deem an LM should have to fully act as a KB, and review the recent literature with respect to those aspects.

Authors (5)
  1. Badr AlKhamissi (24 papers)
  2. Millicent Li (3 papers)
  3. Asli Celikyilmaz (80 papers)
  4. Mona Diab (71 papers)
  5. Marjan Ghazvininejad (33 papers)
Citations (155)