Adapting to Teammates in a Cooperative Language Game (2403.00823v1)
Abstract: The game of Codenames has recently emerged as a domain of interest for intelligent agent design. The game is unique due to the way that language and coordination between teammates play important roles. Previous approaches to designing agents for this game have utilized a single internal LLM to determine action choices. This often leads to good performance with some teammates and inferior performance with others, as the agent cannot adapt to any specific teammate. In this paper, we present the first adaptive agent for playing Codenames. We adopt an ensemble approach with the goal of determining, over the course of interacting with a specific teammate, which of our internal expert agents, each potentially with its own LLM, is the best match. One difficulty faced in this approach is the lack of a single numerical metric that accurately captures the performance of a Codenames team. Prior Codenames research has utilized a handful of different metrics to evaluate agent teams. We propose a novel single metric to evaluate the performance of a Codenames team, whether playing a single-team (solitaire) game or a competitive game against another team. We then present and analyze an ensemble agent which selects an internal expert on each turn in order to maximize this proposed metric. Experimental analysis shows that this ensemble approach adapts to individual teammates and often performs nearly as well as the best internal expert for a given teammate. Crucially, this success does not depend on any previous knowledge about the teammates, the ensemble agents, or their compatibility. This research represents an important step toward making language-based agents for cooperative language settings like Codenames more adaptable to individual teammates.
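The abstract describes an ensemble agent that picks one internal expert per turn so as to maximize a single team-performance metric, adapting online to an unknown teammate. The sketch below illustrates one way such turn-level expert selection could be framed as a multi-armed bandit over experts; the UCB1 rule, the `Expert` interface, and the [0, 1] reward standing in for the paper's metric are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of bandit-style expert selection for an ensemble Codenames agent.
# Assumptions (not from the paper): a UCB1 selection rule, rewards in [0, 1]
# derived from a single team-performance metric, and the Expert interface below.
import math
import random
from typing import List, Optional


class Expert:
    """Hypothetical internal expert; in the paper each expert may wrap its own language model."""

    def __init__(self, name: str):
        self.name = name

    def act(self, game_state: Optional[object]) -> str:
        # Placeholder: a real expert would produce a clue (as spymaster) or a guess (as guesser).
        return f"{self.name}-action"


class EnsembleAgent:
    """Chooses one internal expert per turn and learns online which expert
    best matches the current teammate. UCB1 is an illustrative selection rule."""

    def __init__(self, experts: List[Expert]):
        self.experts = experts
        self.counts = [0] * len(experts)    # times each expert has been chosen
        self.values = [0.0] * len(experts)  # running mean reward per expert

    def select_expert(self) -> int:
        # Try every expert once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        total = sum(self.counts)
        ucb = [
            self.values[i] + math.sqrt(2.0 * math.log(total) / self.counts[i])
            for i in range(len(self.experts))
        ]
        return max(range(len(self.experts)), key=lambda i: ucb[i])

    def update(self, chosen: int, reward: float) -> None:
        # Incremental update of the chosen expert's mean observed reward.
        self.counts[chosen] += 1
        self.values[chosen] += (reward - self.values[chosen]) / self.counts[chosen]


if __name__ == "__main__":
    # Toy loop: random rewards stand in for the paper's single team-performance metric.
    random.seed(0)
    agent = EnsembleAgent([Expert("word2vec"), Expert("glove"), Expert("bert")])
    hidden_compatibility = [0.3, 0.5, 0.7]  # unknown fit with this particular teammate
    for _ in range(300):
        i = agent.select_expert()
        _ = agent.experts[i].act(game_state=None)
        agent.update(i, reward=float(random.random() < hidden_compatibility[i]))
    best = max(range(len(agent.experts)), key=lambda i: agent.values[i])
    print("adapted to expert:", agent.experts[best].name)
```

With rewards tied to a single scalar metric, any standard bandit rule (UCB1, Exp3, Thompson sampling) could drive the per-turn choice; the key design point mirrored here is that adaptation requires no prior knowledge of which expert suits which teammate.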