RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models (2310.00746v3)
Abstract: The advent of LLMs has paved the way for complex tasks such as role-playing, which enhances user interactions by enabling models to imitate various characters. However, the closed-source nature of state-of-the-art LLMs and their general-purpose training limit their optimization for role-playing. In this paper, we introduce RoleLLM, a framework to benchmark, elicit, and enhance role-playing abilities in LLMs. RoleLLM comprises four stages: (1) Role Profile Construction for 100 roles; (2) Context-Based Instruction Generation (Context-Instruct) for role-specific knowledge extraction; (3) Role Prompting using GPT (RoleGPT) for speaking-style imitation; and (4) Role-Conditioned Instruction Tuning (RoCIT) for fine-tuning open-source models along with role customization. With Context-Instruct and RoleGPT, we create RoleBench, the first systematic and fine-grained character-level benchmark dataset for role-playing, with 168,093 samples. Moreover, applying RoCIT on RoleBench yields RoleLLaMA (English) and RoleGLM (Chinese), significantly enhancing their role-playing abilities and even achieving results comparable to RoleGPT (which uses GPT-4).
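The abstract only names the role-prompting stage; as a rough illustration of what RoleGPT-style prompting might look like in practice, the sketch below assembles a system prompt from a role description plus a few speaking-style exemplars and sends it to a chat model. The profile fields, the `build_role_prompt` helper, and the exemplar format are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of role prompting in the spirit of RoleGPT.
# Assumes a role profile of (name, description, dialogue exemplars);
# all names and the exemplar format here are hypothetical.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment


def build_role_prompt(role_name: str,
                      description: str,
                      dialogue_examples: list[tuple[str, str]]) -> list[dict]:
    """Assemble a chat prefix that asks the model to stay in character."""
    system = (
        f"You are {role_name}. {description} "
        f"Answer every question in {role_name}'s speaking style "
        f"and never break character."
    )
    messages = [{"role": "system", "content": system}]
    # Few-shot demonstrations of the character's speaking style.
    for user_turn, character_turn in dialogue_examples:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": character_turn})
    return messages


def role_play(role_name: str,
              description: str,
              dialogue_examples: list[tuple[str, str]],
              question: str,
              model: str = "gpt-4") -> str:
    """Ask one in-character question and return the model's reply."""
    messages = build_role_prompt(role_name, description, dialogue_examples)
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

The few-shot exemplars stand in for the speaking-style signal that, per the abstract, RoleGPT elicits from GPT; the same (instruction, in-character response) pairs are the kind of data RoleBench collects for fine-tuning open-source models via RoCIT.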
- Zekun Moore Wang
- Zhongyuan Peng
- Haoran Que
- Jiaheng Liu
- Wangchunshu Zhou
- Yuhan Wu
- Hongcheng Guo
- Ruitong Gan
- Zehao Ni
- Man Zhang
- Zhaoxiang Zhang
- Wanli Ouyang
- Ke Xu
- Jie Fu
- Junran Peng
- Jian Yang
- Stephen W. Huang