
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis (2308.09830v3)

Published 18 Aug 2023 in cs.AI

Abstract: This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: LLMs and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
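The neuro-symbolic approach described above can be illustrated with a minimal sketch. This is a hypothetical reading of the CLARION-inspired loop, not the paper's implementation: the symbolic layer's facts direct prompt construction (top-down guidance), and the LLM's output is parsed back into symbolic representations (bottom-up learning). All names here (`llm`, `NeuroSymbolicAgent`, the `predicate(args)` fact format) are illustrative assumptions, and the LLM call is stubbed out so the example is self-contained.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned 'fact' for demonstration."""
    return "is_a(tomato, fruit)"

class NeuroSymbolicAgent:
    """Hypothetical two-level agent: symbolic facts on top, an LLM below."""

    def __init__(self) -> None:
        self.facts: set[str] = set()  # symbolic store, populated bottom-up

    def top_down(self, goal: str) -> str:
        # Top-down guidance: known symbolic facts constrain the prompt.
        context = "; ".join(sorted(self.facts)) or "none"
        return f"Known facts: {context}. State one new fact relevant to: {goal}"

    def bottom_up(self, response: str) -> None:
        # Bottom-up learning: extract a symbolic representation from LLM output.
        response = response.strip()
        if "(" in response and response.endswith(")"):
            self.facts.add(response)

    def step(self, goal: str) -> set[str]:
        # One cycle: symbolic layer shapes the prompt, LLM output feeds back.
        self.bottom_up(llm(self.top_down(goal)))
        return self.facts

agent = NeuroSymbolicAgent()
agent.step("classify a tomato")
print(agent.facts)  # {'is_a(tomato, fruit)'}
```

Each `step` closes the loop once; over repeated cycles the growing fact store would progressively specialize the prompts, which is the intended synergy between the two levels.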

Authors (4)
  1. Oscar J. Romero (7 papers)
  2. John Zimmerman (17 papers)
  3. Aaron Steinfeld (17 papers)
  4. Anthony Tomasic (8 papers)
Citations (13)