Modeling Layered Consciousness with Multi-Agent Large Language Models (2510.17844v1)

Published 10 Oct 2025 in cs.CL, cs.AI, and cs.MA

Abstract: We propose a multi-agent framework for modeling artificial consciousness in LLMs, grounded in psychoanalytic theory. Our Psychodynamic Model simulates self-awareness, preconsciousness, and unconsciousness through agent interaction, guided by a Personalization Module combining fixed traits and dynamic needs. Using parameter-efficient fine-tuning on emotionally rich dialogues, the system was evaluated across eight personalized conditions. An LLM-as-a-judge approach showed a 71.2% preference for the fine-tuned model, with improved emotional depth and reduced output variance, demonstrating its potential for adaptive, personalized cognition.

Summary

  • The paper introduces a multi-agent framework that simulates distinct consciousness layers (self-awareness, preconsciousness, unconsciousness) to enhance AI personalization.
  • It employs independent agents with inter-agent reasoning and integrates fixed and flexible state components to capture stable traits and dynamic needs.
  • Evaluation results indicate that the fine-tuned model outperforms baselines with a 71.2% preference, demonstrating improved emotional depth and personalized responses.

Modeling Layered Consciousness with Multi-Agent LLMs

Introduction

The paper "Modeling Layered Consciousness with Multi-Agent LLMs" (2510.17844) introduces an innovative framework designed to simulate human-like consciousness through the use of LLMs. Grounded in psychoanalytic theory, the authors propose a multi-agent framework that models self-awareness, preconsciousness, and unconsciousness by deploying specialized agents that interact dynamically. This approach seeks to address the limitations of LLMs, which traditionally lack deep motivational states, unconscious drives, and consistent personality structures.

Framework Design

The framework comprises two main modules: the Consciousness Module and the Personalization Module. The Consciousness Module employs independent agents to represent the three levels of consciousness (self-awareness, preconsciousness, and unconsciousness) and uses inter-agent reasoning to produce a Final Action. The Personalization Module integrates Fixed State and Flexible State components to capture both stable traits and dynamic needs (Figure 1).

Figure 1: Overview of the Psychodynamic Multi-Agent Framework, highlighting the coordination of consciousness and personalization modules.
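
The coordination described above can be summarized in a short sketch. The class and method names below are illustrative assumptions (the paper does not publish this interface); `llm` stands for any prompt-to-text callable backing the agents.

```python
# Minimal sketch of the two-module design, under the assumptions stated above.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

LLM = Callable[[str], str]  # any prompt -> text function (e.g., an API wrapper)

@dataclass
class PersonalizationModule:
    fixed_traits: Dict[str, str]      # Fixed State: stable personality traits
    flexible_needs: Dict[str, float]  # Flexible State: dynamic, context-driven needs

    def render(self) -> str:
        traits = ", ".join(f"{k}={v}" for k, v in self.fixed_traits.items())
        needs = ", ".join(f"{k}={v:.1f}" for k, v in self.flexible_needs.items())
        return f"Traits: {traits}. Current needs: {needs}."

@dataclass
class ConsciousnessModule:
    llm: LLM
    layers: List[str] = field(default_factory=lambda: [
        "self-awareness", "preconsciousness", "unconsciousness"])

    def final_action(self, scenario: str, persona: str) -> str:
        # Each layer-agent reasons over the scenario in light of the persona,
        # then an integration step merges the three perspectives into one action.
        views = {
            layer: self.llm(
                f"You are the {layer} agent.\nPersona: {persona}\n"
                f"Scenario: {scenario}\nRespond from this layer's perspective.")
            for layer in self.layers
        }
        merged = "\n".join(f"{k}: {v}" for k, v in views.items())
        return self.llm(f"Integrate these layer outputs into one Final Action:\n{merged}")
```

A stub such as `lambda prompt: "..."` can stand in for `llm` when exercising the control flow without a model.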

The Consciousness Module's design particularly emphasizes interaction among the three agents to mimic the layers of human consciousness. Self-awareness is tasked with intentional reasoning, preconsciousness with social awareness, and unconsciousness with latent emotional expression (Figure 2).

Figure 2: Overview of the psychodynamic model, illustrating agent operations in a provided scenario.
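
This division of labour among the three layer-agents can be expressed as role prompts; the wording below is hypothetical and not taken from the paper's prompts.

```python
# Hypothetical role descriptions for the three layer-agents.
LAYER_ROLES = {
    "self-awareness":   "Reason deliberately about intentions, goals, and plans.",
    "preconsciousness": "Attend to social context, norms, and the other speaker.",
    "unconsciousness":  "Surface latent emotions and drives without filtering.",
}

def build_agent_prompt(layer: str, persona: str, scenario: str) -> str:
    # Combine the layer's role with the persona context and the current scenario.
    return (f"You are the {layer} agent. {LAYER_ROLES[layer]}\n"
            f"Persona: {persona}\nScenario: {scenario}\n"
            f"Reply with this layer's contribution only.")
```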

Evaluation and Results

The psychodynamic model was evaluated with an "LLM as a Judge" approach, in which external LLMs assess fidelity, personalization, and clarity of reasoning across various scenarios. The fine-tuned LLM consistently outperformed the baseline model, achieving a 71.2% preference in evaluations, attributed to greater emotional depth and more personalized response generation (Figure 3).

Figure 3: Performance comparison showing the fine-tuned model's superiority across key areas.
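
A pairwise LLM-as-judge comparison of this kind can be sketched as follows; the judge prompt, criteria wording, and order randomization are assumptions rather than the paper's exact protocol.

```python
# Sketch of a pairwise LLM-as-judge comparison (assumed protocol, see above).
import random
from typing import Callable, List

def judge_pair(judge: Callable[[str], str], scenario: str,
               answer_a: str, answer_b: str) -> str:
    # Randomize presentation order to reduce position bias in the judge.
    swap = random.random() < 0.5
    first, second = (answer_b, answer_a) if swap else (answer_a, answer_b)
    verdict = judge(
        "Compare the two responses for fidelity to the persona, personalization, "
        "and clarity of reasoning. Answer '1' or '2'.\n"
        f"Scenario: {scenario}\nResponse 1: {first}\nResponse 2: {second}")
    picked_first = verdict.strip().startswith("1")
    # Map the judge's choice back to the original labels.
    return ("B" if picked_first else "A") if swap else ("A" if picked_first else "B")

def preference_rate(results: List[str], target: str = "A") -> float:
    # Fraction of comparisons won by the target system (e.g., the fine-tuned model).
    return sum(r == target for r in results) / len(results)
```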

The evaluation analyzed outputs under eight distinct conditions reflecting varying internal states, with accuracy and response individualization improving when parameter-efficient fine-tuning was applied (Figure 4).

Figure 4: Comparative performance with input reflecting diverse needs and states showcasing enhanced model accuracy.
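
The paper reports parameter-efficient fine-tuning on emotionally rich dialogues; one common realization is LoRA via the Hugging Face peft library. The base model, adapter hyperparameters, and target modules below are illustrative assumptions, not the paper's configuration.

```python
# Illustrative LoRA setup for parameter-efficient fine-tuning (assumed recipe).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.1-8B-Instruct"    # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()              # only a small fraction is trainable

# The adapter would then be trained on persona-conditioned, emotionally rich
# dialogues while the base weights stay frozen.
```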

Implications and Future Work

By simulating psychodynamic processes, the framework lays the groundwork for AI systems that exhibit layered, context-sensitive consciousness. Future work could explore broader applicability across varied character profiles and more complex interaction dynamics within multi-agent environments. As an ethical consideration, transparency and informed user consent must accompany practical deployments of such models to mitigate anthropomorphism and potential biases.

Conclusion

This research offers significant insights into integrating psychoanalytic constructs within LLM architectures to simulate human-like consciousness. The proposed model demonstrates promising capabilities in personalized cognition, which can enhance interaction quality in artificial agents, paving the way for further exploration and refinement in simulating human-like cognitive processes in AI systems.
