
Humanoid Agents: Platform for Simulating Human-like Generative Agents (2310.05418v1)

Published 9 Oct 2023 in cs.CL, cs.AI, and cs.HC

Abstract: Just as computational simulations of atoms, molecules and cells have shaped the way we study the sciences, true-to-life simulations of human-like agents can be valuable tools for studying human behavior. We propose Humanoid Agents, a system that guides Generative Agents to behave more like humans by introducing three elements of System 1 processing: Basic needs (e.g. hunger, health and energy), Emotion and Closeness in Relationships. Humanoid Agents are able to use these dynamic elements to adapt their daily activities and conversations with other agents, as supported with empirical experiments. Our system is designed to be extensible to various settings, three of which we demonstrate, as well as to other elements influencing human behavior (e.g. empathy, moral values and cultural background). Our platform also includes a Unity WebGL game interface for visualization and an interactive analytics dashboard to show agent statuses over time. Our platform is available on https://www.humanoidagents.com/ and code is on https://github.com/HumanoidAgents/HumanoidAgents

Overview of Humanoid Agents: Platform for Simulating Human-like Generative Agents

Humanoid Agents is a platform designed to make simulations of generative agents more human-like by integrating three elements of System 1 processing: basic needs, emotion, and closeness in relationships. The work builds on prior research, most notably Generative Agents, and addresses that system's limitations by focusing on a more realistic simulation of adaptive human behavior.

Core Contributions

The paper introduces three primary components of System 1 processing into generative agents, enabling them to behave in a manner more closely aligned with human behavior (a minimal state sketch follows the list):

  1. Basic Needs: Fundamental necessities such as hunger, health, and energy are incorporated into agent behavior. Agents dynamically alter their planned activities to satisfy these needs as their internal state changes.
  2. Emotion: Tracking emotional state lets agents adjust their behavior according to how they currently feel, which is crucial for modeling natural human adaptability.
  3. Relationship Closeness: To emulate the nuances of human interaction, agents adjust their conversational strategies based on how close they are to the agent they are speaking with.
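
To make these elements concrete, the following minimal Python sketch shows one way such agent state could be represented and updated. The field names, the 0-10 need scale, and the decay and planning logic are illustrative assumptions for exposition, not the actual HumanoidAgents implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the field names, the 0-10 scale, and the update
# rules are assumptions for exposition, not the HumanoidAgents API.

@dataclass
class AgentState:
    # Basic needs decay over time and are replenished by matching activities.
    basic_needs: dict = field(default_factory=lambda: {
        "fullness": 10, "health": 10, "energy": 10, "fun": 10, "social": 10})
    emotion: str = "neutral"                        # e.g. a basic-emotion label
    closeness: dict = field(default_factory=dict)   # other agent name -> score

def tick(state: AgentState, decay: int = 1) -> None:
    """Lower every basic need by a fixed amount per simulated time step."""
    for need in state.basic_needs:
        state.basic_needs[need] = max(0, state.basic_needs[need] - decay)

def most_pressing_need(state: AgentState) -> str:
    """The lowest need is the one the next planned activity should address."""
    return min(state.basic_needs, key=state.basic_needs.get)

state = AgentState()
tick(state, decay=3)
state.basic_needs["fullness"] = 2      # e.g. the agent skipped a meal
print(most_pressing_need(state))       # -> "fullness": plan to eat next
```

In the platform itself, these dynamic values guide how an agent adapts its daily activities and conversations with other agents as the day unfolds.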

Platform Implementation

The Humanoid Agents platform is extensible to various settings and incorporates a Unity WebGL interface for visualization and an interactive analytics dashboard. This setup provides a robust framework for simulating, displaying, and analyzing agent behavior over time.
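
As a rough illustration of how such a dashboard could be fed, the sketch below reuses the AgentState and tick helpers from the previous example to log every agent's needs and emotion at each simulated hour to a JSON file. The schema, file name, and hourly time step are assumptions, not the platform's actual output format.

```python
import json

# Hypothetical per-timestep status log that an analytics dashboard could plot.
# Reuses AgentState and tick() from the sketch above; the schema is assumed.

def run_day(agents, start_hour=6, end_hour=22, path="agent_statuses.json"):
    log = []
    for hour in range(start_hour, end_hour + 1):
        for name, state in agents.items():
            tick(state)                    # basic needs decline as time passes
            log.append({
                "time": f"{hour:02d}:00",
                "agent": name,
                "basic_needs": dict(state.basic_needs),
                "emotion": state.emotion,
            })
    with open(path, "w") as f:
        json.dump(log, f, indent=2)

run_day({"Alice": AgentState(), "Bob": AgentState()})
```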

Experimental Validation

Empirical experiments demonstrate that agents using this system can infer changes in their System 1 attributes and respond to them. Evaluations comparing the system's predictions with human annotations showed substantial agreement, particularly for detecting emotions and social dynamics. Aspects such as the need for fun and the need for social interaction proved harder to model accurately and may warrant further refinement.
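
One simple way to picture this kind of evaluation is to score agreement between the labels the system infers and those assigned by human annotators, for example with raw accuracy and chance-corrected Cohen's kappa. The labels and numbers below are invented for illustration and do not reproduce the paper's results.

```python
from collections import Counter

# Toy agreement check between system-inferred emotion labels and human
# annotations; the data here is made up for illustration.
human  = ["happy", "neutral", "angry", "happy", "sad", "neutral"]
system = ["happy", "neutral", "happy", "happy", "sad", "angry"]

n = len(human)
accuracy = sum(h == s for h, s in zip(human, system)) / n

# Cohen's kappa: observed agreement corrected for chance agreement,
# estimated from each side's label marginals.
marg_h, marg_s = Counter(human), Counter(system)
p_e = sum(marg_h[l] * marg_s[l] for l in set(human) | set(system)) / (n * n)
kappa = (accuracy - p_e) / (1 - p_e)

print(f"accuracy={accuracy:.2f}, kappa={kappa:.2f}")
```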

Implications and Future Directions

While the Humanoid Agents platform advances the fidelity of human-like simulation, it has limitations, including handling multiparty dialogues and synchronizing activities across agents. Future iterations aim to model individual differences in how quickly agents' basic needs decline and to extend conversations from dyadic to multiparty interactions.

This research has clear implications for fields that rely on simulations of human behavior, including computational social science and AI development. As AI continues to evolve, platforms like Humanoid Agents provide a foundational tool for exploring complex human-agent and agent-agent interactions within simulated environments.

Overall, Humanoid Agents marks a notable step toward closing the gap between simplified agent models and the rich, dynamic complexity of human behavior. Extensions that accommodate additional psychological and cultural factors could further increase the system's utility and realism.

Authors (3)
  1. Zhilin Wang
  2. Yu Ying Chiu
  3. Yu Cheung Chiu
Citations (41)