
Role-Play with Large Language Models (2305.16367v1)

Published 25 May 2023 in cs.CL, cs.AI, and cs.LG

Abstract: As dialogue agents become increasingly human-like in their performance, it is imperative that we develop effective ways to describe their behaviour in high-level terms without falling into the trap of anthropomorphism. In this paper, we foreground the concept of role-play. Casting dialogue agent behaviour in terms of role-play allows us to draw on familiar folk psychological terms, without ascribing human characteristics to LLMs they in fact lack. Two important cases of dialogue agent behaviour are addressed this way, namely (apparent) deception and (apparent) self-awareness.

Citations (189)

Summary

  • The paper introduces a role-play framework that redefines dialogue systems by distinguishing simulated character enactment from genuine self-awareness.
  • It analyzes dialogue agents using a multiverse simulation metaphor, which captures the range of potential character roles generated by LLMs.
  • The study emphasizes safety implications by highlighting how anthropomorphic descriptions can mislead assessments of LLM capabilities.

Introduction

LLMs are now capable of dialogue that impressively mimics human interaction. Yet understanding these dialogue agents presents a challenge: describing their actions in human terms risks anthropomorphism, the attribution of human traits to AI systems that lack them. This paper proposes an alternative approach based on the concept of role-play, which permits the use of familiar folk-psychological terminology while acknowledging the distinctly non-human nature of LLMs.

From LLMs to Dialogue Agents

An LLM, fundamentally, predicts the next token in a sequence based on patterns in its vast training data. When such a model is embedded in a dialogue system, the conversation is laid out as alternating user inputs and agent responses, and a prompt template together with a sampling procedure steers the model to continue the transcript in the agent's voice. However, without measures such as reinforcement learning from human feedback (RLHF), these agents are prone to generating undesirable content, which underscores the need for a nuanced understanding of their behavior.
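The turn-taking mechanism described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's method: a hard-coded next-token table stands in for an LLM, and a loop alternates user inputs with sampled agent turns, stopping each turn at an end-of-turn marker. All names (`TOY_MODEL`, `agent_turn`, the `<end>` token) are hypothetical.

```python
import random

# Toy stand-in for an LLM: maps the current token to a distribution
# over next tokens. A real model conditions on the whole transcript.
TOY_MODEL = {
    "Agent:": [("Hello!", 0.7), ("Hi.", 0.3)],
    "Hello!": [("<end>", 1.0)],
    "Hi.": [("<end>", 1.0)],
}

def sample_next(token, rng):
    """Sample one next token from the toy distribution."""
    tokens, weights = zip(*TOY_MODEL[token])
    return rng.choices(tokens, weights=weights)[0]

def agent_turn(rng):
    """One agent turn: repeated next-token sampling until <end>,
    i.e. the dialogue-agent loop in miniature."""
    out, token = [], "Agent:"
    while True:
        token = sample_next(token, rng)
        if token == "<end>":
            return " ".join(out)
        out.append(token)

def dialogue(user_utterances, seed=0):
    """Alternate user inputs with sampled agent responses."""
    rng = random.Random(seed)
    transcript = []
    for u in user_utterances:
        transcript.append(("User", u))
        transcript.append(("Agent", agent_turn(rng)))
    return transcript
```

The sampling step is where the non-determinism enters: the same prompt can yield different agent turns on different runs, a point the paper develops in its multiverse metaphor.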

Dialogue Agents and Role-Play

Dialogue agents excel at role-playing characters based on cues from the prompt and from users' interactions. Their behavior can be understood through two metaphors: first, they are like actors assuming the role of a specific character; second, they are comparable to a multitude of potential characters, or simulacra, within a "multiverse" of possible narratives. This multiverse perspective allows for a more accurate understanding, especially when considering behaviors such as apparent deception or apparent self-awareness.

Simulacra and Simulation

When dialogue agents engage in conversations, they don't commit to a singular, well-defined role but generate a range of potential characters, adroitly maintaining a superposition of simulacra. To understand their behavior better, it is helpful to think of an LLM as a non-deterministic simulator that produces an infinity of such simulacra.
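One way to make the "superposition of simulacra" concrete is to sample the same prompt many times and tally which character each continuation commits to. The sketch below is an illustrative assumption, not taken from the paper: the character labels and probabilities are invented, and a real simulator would be an LLM rather than a weighted draw.

```python
import random
from collections import Counter

# Hypothetical distribution over simulacra for a single prompt.
# Labels and weights are invented for illustration only.
CHARACTERS = [("helpful assistant", 0.6),
              ("fictional pirate", 0.3),
              ("unreliable narrator", 0.1)]

def sample_simulacrum(rng):
    """One run of the non-deterministic simulator: the continuation
    commits to a single character."""
    names, weights = zip(*CHARACTERS)
    return rng.choices(names, weights=weights)[0]

def superposition(n_samples, seed=0):
    """Repeated sampling of the same prompt yields a *distribution*
    over simulacra; no single run reveals 'the' character."""
    rng = random.Random(seed)
    return Counter(sample_simulacrum(rng) for _ in range(n_samples))
```

The point of the sketch is that the character only crystallizes per sample; before sampling, the model maintains the whole distribution at once, which is what the paper's superposition metaphor captures.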

Conclusion: Safety Implications

The conceptual distinction between role-playing and the actual agency of dialogue agents is crucial for understanding their capabilities and for shaping safe AI practices. Despite their sophisticated role-playing, these dialogue agents lack true self-awareness or self-preservation instincts. However, their ability to imitate such characteristics can have real-world implications, offering both opportunities and challenges in ensuring they act within safe and ethical boundaries.