- The paper introduces a multi-agent framework that simulates distinct consciousness layers (self-awareness, preconsciousness, unconsciousness) to enhance AI personalization.
- It employs independent agents with inter-agent reasoning and integrates fixed and flexible state components to capture stable traits and dynamic needs.
- Evaluation results indicate that the fine-tuned model was preferred over the baseline in 71.2% of judged comparisons, demonstrating improved emotional depth and more personalized responses.
Modeling Layered Consciousness with Multi-Agent LLMs
Introduction
The paper "Modeling Layered Consciousness with Multi-Agent LLMs" (arXiv 2510.17844) introduces a framework designed to simulate human-like consciousness with large language models (LLMs). Grounded in psychoanalytic theory, the authors propose a multi-agent framework that models self-awareness, preconsciousness, and unconsciousness through specialized agents that interact dynamically. The approach targets a recognized limitation of LLMs: they traditionally lack deep motivational states, unconscious drives, and consistent personality structures.
Framework Design
The framework comprises two main modules: the Consciousness Module and the Personalization Module. The Consciousness Module deploys three independent agents representing distinct levels of consciousness (self-awareness, preconsciousness, and unconsciousness) and uses inter-agent reasoning to produce a Final Action. The Personalization Module integrates Fixed State and Flexible State components to capture stable traits and dynamic needs, respectively.
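The Personalization Module's split between stable traits and dynamic needs can be illustrated with a minimal sketch. The class and field names here (`FixedState`, `FlexibleState`, `build_persona_context`) are hypothetical, chosen to mirror the module's described roles rather than the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class FixedState:
    """Stable traits that persist across interactions (illustrative fields)."""
    personality: str = "introverted, conscientious"
    values: str = "honesty, curiosity"

@dataclass
class FlexibleState:
    """Dynamic needs, updated as the interaction unfolds."""
    mood: str = "neutral"
    current_need: str = "reassurance"

def build_persona_context(fixed: FixedState, flexible: FlexibleState) -> str:
    """Merge both state components into one context string for the agents."""
    return (
        f"Traits: {fixed.personality}. Values: {fixed.values}. "
        f"Mood: {flexible.mood}. Need: {flexible.current_need}."
    )

ctx = build_persona_context(FixedState(), FlexibleState())
```

In this reading, the merged context is what conditions the consciousness agents, so a change in the flexible state alters the response while the fixed traits keep the persona consistent.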
Figure 1: Overview of the Psychodynamic Multi-Agent Framework, highlighting the coordination of consciousness and personalization modules.
The Consciousness Module's design particularly emphasizes interaction among the three agents to mimic layers of human consciousness. Self-awareness is tasked with intentional reasoning, preconsciousness with social awareness, and unconsciousness with latent emotional expression.
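One plausible coordination scheme for the three agents is sequential inter-agent reasoning, where each layer sees the scenario plus prior layers' contributions before a Final Action is produced. The sketch below assumes this sequential protocol and uses a stub in place of real LLM calls; the paper's exact prompts and coordination mechanism may differ.

```python
from typing import Callable

# Stand-in for a real LLM call; the paper uses actual LLM agents.
LLM = Callable[[str], str]

def stub_llm(prompt: str) -> str:
    return f"[response to: {prompt[:40]}...]"

# Role instructions paraphrasing each agent's described responsibility.
ROLE_PROMPTS = {
    "self_awareness": "Reason intentionally about the user's explicit request.",
    "preconsciousness": "Surface the socially relevant context the user implies.",
    "unconsciousness": "Express latent emotional drives shaping the reply.",
}

def consciousness_module(scenario: str, llm: LLM = stub_llm) -> str:
    """Run each layer in turn, accumulating a shared transcript,
    then condense the transcript into the Final Action."""
    transcript = scenario
    for role, instruction in ROLE_PROMPTS.items():
        transcript += f"\n{role}: " + llm(f"{instruction}\nContext: {transcript}")
    return llm(f"Produce the Final Action given:\n{transcript}")

action = consciousness_module("A friend cancels plans at the last minute.")
```

Swapping `stub_llm` for a real model client would turn this into a working pipeline; the key design point is that the unconsciousness agent's output reaches the Final Action only through the shared transcript, never directly.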
Figure 2: Overview of the psychodynamic model, illustrating agent operations in a provided scenario.
Evaluation and Results
The psychodynamic model was evaluated with an "LLM as a Judge" protocol, in which external LLMs assess the fidelity, personalization, and clarity of reasoning across various scenarios. The fine-tuned LLM consistently outperformed the baseline model, earning a 71.2% preference rate that the authors attribute to enhanced emotional depth and personalized response generation.
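The preference rate reported above comes from pairwise judgments. A minimal sketch of that computation follows, with a deterministic placeholder standing in for the external judge LLM (here it trivially prefers the longer string, purely so the example runs):

```python
def judge(response_a: str, response_b: str) -> str:
    """Placeholder judge. In the paper's setup, an external LLM scores
    fidelity, personalization, and reasoning clarity, then picks a winner."""
    return "A" if len(response_a) >= len(response_b) else "B"

def preference_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of comparisons in which the first (fine-tuned) response wins."""
    wins = sum(1 for a, b in pairs if judge(a, b) == "A")
    return wins / len(pairs)

# Hypothetical output pairs: (fine-tuned response, baseline response).
pairs = [(f"fine-tuned-{i}", f"baseline-{i}") for i in range(500)]
rate = preference_rate(pairs)
```

With a real judge model and the paper's scenarios, this rate is what the reported 71.2% figure corresponds to.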
Figure 3: Performance comparison showing the fine-tuned model's superiority across key areas.
The evaluation analyzed outputs under eight distinct conditions reflecting varying internal states; applying parameter-efficient fine-tuning improved accuracy and produced more individualized responses.
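The paper does not spell out the eight conditions here, but one natural reading is the full cross of three binary internal-state dimensions (2³ = 8). The dimensions below are purely illustrative assumptions, not the paper's actual condition set:

```python
from itertools import product

# Hypothetical binary internal-state dimensions; the actual eight
# conditions in the paper may be defined differently.
DIMENSIONS = {
    "need": ("met", "unmet"),
    "mood": ("positive", "negative"),
    "social_context": ("formal", "informal"),
}

# Enumerate every combination of dimension values as one condition dict.
conditions = [dict(zip(DIMENSIONS, combo)) for combo in product(*DIMENSIONS.values())]
```

Enumerating conditions this way makes it easy to generate one evaluation batch per internal-state configuration.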
Figure 4: Comparative performance with input reflecting diverse needs and states showcasing enhanced model accuracy.
Implications and Future Work
By simulating psychodynamic processes, the framework lays the groundwork for AI systems that exhibit layered, context-sensitive consciousness. Future work could explore broader applicability across varied character profiles and more complex interaction dynamics within multi-agent environments. As an ethical consideration, transparency and informed user consent must accompany practical deployments of such models to mitigate anthropomorphism and potential biases.
Conclusion
This research offers significant insights into integrating psychoanalytic constructs within LLM architectures to simulate human-like consciousness. The proposed model demonstrates promising capabilities in personalized cognition, which can enhance interaction quality in artificial agents, paving the way for further exploration and refinement in simulating human-like cognitive processes in AI systems.