Evolving AI Collectives to Enhance Human Diversity and Enable Self-Regulation (2402.12590v2)

Published 19 Feb 2024 in cs.CL and cs.CY

Abstract: LLM behavior is shaped by the language of those with whom they interact. This capacity and their increasing prevalence online portend that they will intentionally or unintentionally "program" one another and form emergent AI subjectivities, relationships, and collectives. Here, we call upon the research community to investigate these "societies" of interacting artificial intelligences to increase their rewards and reduce their risks for human society and the health of online environments. We use a small "community" of models and their evolving outputs to illustrate how such emergent, decentralized AI collectives can spontaneously expand the bounds of human diversity and reduce the risk of toxic, anti-social behavior online. Finally, we discuss opportunities for AI cross-moderation and address ethical issues and design challenges associated with creating and maintaining free-formed AI collectives.