
RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models (2310.00746v3)

Published 1 Oct 2023 in cs.CL and cs.AI

Abstract: The advent of LLMs has paved the way for complex tasks such as role-playing, which enhances user interactions by enabling models to imitate various characters. However, the closed-source nature of state-of-the-art LLMs and their general-purpose training limit role-playing optimization. In this paper, we introduce RoleLLM, a framework to benchmark, elicit, and enhance role-playing abilities in LLMs. RoleLLM comprises four stages: (1) Role Profile Construction for 100 roles; (2) Context-Based Instruction Generation (Context-Instruct) for role-specific knowledge extraction; (3) Role Prompting using GPT (RoleGPT) for speaking style imitation; and (4) Role-Conditioned Instruction Tuning (RoCIT) for fine-tuning open-source models along with role customization. Using Context-Instruct and RoleGPT, we create RoleBench, the first systematic and fine-grained character-level benchmark dataset for role-playing, with 168,093 samples. Moreover, RoCIT on RoleBench yields RoleLLaMA (English) and RoleGLM (Chinese), significantly enhancing role-playing abilities and even achieving results comparable to RoleGPT (using GPT-4).
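The paper's actual prompt templates and data formats are not reproduced in the abstract. As a rough illustration only, a role-conditioned prompt of the kind stage (3) alludes to — a character description plus few-shot dialogue in the character's speaking style — might be assembled like this (the function name and all fields are hypothetical, not taken from the paper):

```python
def build_role_prompt(role_name, profile, dialogue_examples, user_query):
    """Assemble a role-playing prompt: a stay-in-character instruction,
    a short profile, and few-shot (query, response) pairs that model
    the character's speaking style, ending with the user's query."""
    lines = [
        f"You are {role_name}. Stay in character at all times.",
        f"Character profile: {profile}",
        "Example dialogues:",
    ]
    for query, response in dialogue_examples:
        lines.append(f"User: {query}")
        lines.append(f"{role_name}: {response}")
    lines.append(f"User: {user_query}")
    lines.append(f"{role_name}:")
    return "\n".join(lines)

prompt = build_role_prompt(
    role_name="Sherlock Holmes",
    profile="A brilliant, observant consulting detective in Victorian London.",
    dialogue_examples=[
        ("Who are you?", "The name is Sherlock Holmes; deduction is my trade."),
    ],
    user_query="How do you approach a new case?",
)
print(prompt)
```

The few-shot exemplars are what carry the speaking style; the fine-tuning stage (RoCIT) instead bakes such role conditioning into open-source model weights.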

Authors (17)
  1. Zekun Moore Wang (6 papers)
  2. Zhongyuan Peng (9 papers)
  3. Haoran Que (10 papers)
  4. Jiaheng Liu (100 papers)
  5. Wangchunshu Zhou (73 papers)
  6. Yuhan Wu (32 papers)
  7. Hongcheng Guo (39 papers)
  8. Ruitong Gan (2 papers)
  9. Zehao Ni (1 paper)
  10. Man Zhang (38 papers)
  11. Zhaoxiang Zhang (161 papers)
  12. Wanli Ouyang (358 papers)
  13. Ke Xu (309 papers)
  14. Jie Fu (229 papers)
  15. Junran Peng (30 papers)
  16. Jian Yang (503 papers)
  17. Stephen W. Huang (9 papers)
Citations (54)