
Language Models as a Knowledge Source for Cognitive Agents (2109.08270v3)

Published 17 Sep 2021 in cs.AI and cs.CL

Abstract: Language models (LMs) are sentence-completion engines trained on massive corpora. LMs have emerged as a significant breakthrough in natural-language processing, providing capabilities that go far beyond sentence completion, including question answering, summarization, and natural-language inference. While many of these capabilities have potential application to cognitive systems, exploiting LMs as a source of task knowledge, especially for task learning, offers significant near-term benefits. We introduce LMs and the various tasks to which they have been applied, and then review methods of knowledge extraction from them. The resulting analysis outlines both the challenges and opportunities of using LMs as a new knowledge source for cognitive systems. It also identifies possible ways to improve knowledge extraction from LMs using the capabilities provided by cognitive systems. Central to success will be the ability of a cognitive agent to itself learn an abstract model of the knowledge implicit in the LM, as well as methods to extract high-quality knowledge effectively and efficiently. To illustrate, we introduce a hypothetical robot agent and describe how an LM could extend its task knowledge and improve its performance, and the kinds of knowledge and methods the agent can use to exploit the knowledge within an LM.
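To make the extraction idea concrete, the sketch below (not from the paper) shows one simple template-based approach: an agent fills a cloze-style prompt with its current task context, samples several completions from a sentence-completion LM, and keeps distinct candidate action steps. The `complete` callable and the `extract_steps` helper are purely hypothetical stand-ins for whatever LM interface and parsing an actual agent would use.

```python
from typing import Callable, List


def make_prompt(task: str, obj: str) -> str:
    """Instantiate a cloze-style template with the agent's task context."""
    return f"To {task} the {obj}, the first step is to"


def extract_steps(complete: Callable[[str], str], task: str, obj: str,
                  n_samples: int = 3) -> List[str]:
    """Query the LM several times and keep distinct candidate steps.

    Sampling repeatedly and filtering duplicates is one cheap way an
    agent can trade query cost against knowledge quality, which is the
    kind of effectiveness/efficiency trade-off the paper highlights.
    """
    candidates: List[str] = []
    for _ in range(n_samples):
        completion = complete(make_prompt(task, obj)).strip()
        step = completion.split(".")[0]  # keep only the first sentence
        if step and step not in candidates:
            candidates.append(step)
    return candidates


if __name__ == "__main__":
    # Stubbed LM for illustration; a real agent would call its LM here.
    stub = lambda prompt: "pick up the dishes on the counter. Then ..."
    print(extract_steps(stub, task="tidy", obj="kitchen"))
```

In the paper's framing, the agent would not accept such completions at face value; it would evaluate candidate steps against its own task model and environment before incorporating them as knowledge.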

Authors (3)
  1. Robert E. Wray, III
  2. James R. Kirk
  3. John E. Laird
Citations (15)