Organizing a Society of Language Models: Structures and Mechanisms for Enhanced Collective Intelligence (2405.03825v1)
Abstract: Recent developments in large language models (LLMs) have significantly expanded their applications across various domains. However, the effectiveness of LLMs is often constrained when they operate individually in complex environments. This paper introduces a transformative approach that organizes LLMs into community-based structures aimed at enhancing their collective intelligence and problem-solving capabilities. We investigate four organizational models (hierarchical, flat, dynamic, and federated), each presenting unique benefits and challenges for collaborative AI systems. Within these structured communities, LLMs are designed to specialize in distinct cognitive tasks, employ advanced interaction mechanisms such as direct communication, voting systems, and market-based approaches, and dynamically adjust their governance structures to meet changing demands. The implementation of such communities holds substantial promise for improving problem-solving capabilities in AI, prompting an in-depth examination of their ethical considerations, management strategies, and scalability. This position paper seeks to lay the groundwork for future research, advocating a paradigm shift from isolated to synergistic operational frameworks in AI research and application.
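To make the voting mechanism mentioned in the abstract concrete, the following is a minimal sketch of a community of specialized agents resolving a question by majority vote. The `Agent` class and its `answer` callable are hypothetical stand-ins introduced here for illustration; the paper does not specify an implementation, and a real system would replace the lambda stubs with actual LLM API calls.

```python
from collections import Counter
from typing import Callable, List

class Agent:
    """Hypothetical community member. In practice, answer_fn would
    wrap a call to an LLM; fixed functions keep this sketch runnable
    without external dependencies."""

    def __init__(self, name: str, answer_fn: Callable[[str], str]):
        self.name = name
        self.answer_fn = answer_fn

    def answer(self, prompt: str) -> str:
        return self.answer_fn(prompt)

def majority_vote(agents: List[Agent], prompt: str) -> str:
    """Poll every agent on the same prompt and return the most
    common answer, i.e., a flat, one-agent-one-vote mechanism."""
    votes = [agent.answer(prompt) for agent in agents]
    winner, _count = Counter(votes).most_common(1)[0]
    return winner

# Usage: three agents specialized in distinct cognitive roles
# vote on the same question; the majority answer ("B") wins.
community = [
    Agent("planner", lambda p: "B"),
    Agent("critic", lambda p: "B"),
    Agent("solver", lambda p: "A"),
]
print(majority_vote(community, "Which option satisfies the constraint?"))
```

Under the same interface, the other mechanisms the abstract names could be swapped in: a hierarchical model would route the prompt through a designated coordinator agent, and a market-based approach would have agents bid for tasks rather than vote on answers.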
- Silvan Ferreira
- Ivanovitch Silva
- Allan Martins