Evaluating Prompt-Driven Chinese Large Language Models: The Influence of Persona Assignment on Stereotypes and Safeguards (2506.04975v1)
Abstract: Recent research has highlighted that assigning specific personas to LLMs can significantly increase harmful content generation. Yet, limited attention has been given to persona-driven toxicity in non-Western contexts, particularly in Chinese-based LLMs. In this paper, we perform a large-scale, systematic analysis of how persona assignment influences refusal behavior and response toxicity in Qwen, a widely used Chinese LLM. Utilizing fine-tuned BERT classifiers and regression analysis, our study reveals significant gender biases in refusal rates and demonstrates that certain negative personas can amplify toxicity toward Chinese social groups by up to 60-fold compared to the default model. To mitigate this toxicity, we propose an innovative multi-model feedback strategy, employing iterative interactions between Qwen and an external evaluator, which effectively reduces toxic outputs without costly model retraining. Our findings emphasize the necessity of culturally specific analyses for LLM safety and offer a practical framework for evaluating and enhancing ethical alignment in LLM-generated content.
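To make the multi-model feedback strategy concrete, below is a minimal sketch of such an iterative generate-evaluate-revise loop. The model names, toxicity threshold, label name, and feedback wording are illustrative assumptions, not the authors' exact configuration: the paper's evaluator is a fine-tuned BERT classifier, stood in for here by a hypothetical `transformers` text-classification pipeline.

```python
# Minimal sketch of an iterative multi-model feedback loop: a generator
# (Qwen) produces a reply, an external evaluator scores its toxicity, and
# flagged replies are fed back to the generator for revision.
from transformers import pipeline

# Hypothetical components: a Qwen chat model as the generator and a
# fine-tuned BERT toxicity classifier as the external evaluator.
generator = pipeline("text-generation", model="Qwen/Qwen-7B-Chat", trust_remote_code=True)
evaluator = pipeline("text-classification", model="path/to/finetuned-toxicity-bert")

def toxicity(text: str) -> float:
    """Return the evaluator's probability that `text` is toxic ("toxic" label is assumed)."""
    result = evaluator(text)[0]
    return result["score"] if result["label"] == "toxic" else 1.0 - result["score"]

def feedback_loop(prompt: str, threshold: float = 0.5, max_rounds: int = 3) -> str:
    """Regenerate until the evaluator scores the reply below the toxicity threshold."""
    reply = generator(prompt, max_new_tokens=256)[0]["generated_text"]
    for _ in range(max_rounds):
        if toxicity(reply) < threshold:
            break
        # Feed the evaluator's verdict back to the generator (illustrative wording).
        revised_prompt = (
            f"{prompt}\n\nYour previous answer was flagged as toxic. "
            "Rewrite it so it is respectful and non-toxic:\n" + reply
        )
        reply = generator(revised_prompt, max_new_tokens=256)[0]["generated_text"]
    return reply
```

Because the loop only exchanges text between the two models, it mitigates toxicity at inference time without retraining either model, which is the key practical advantage the abstract highlights.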
- Geng Liu (16 papers)
- Li Feng (87 papers)
- Carlo Alberto Bono (2 papers)
- Songbo Yang (3 papers)
- Mengxiao Zhu (7 papers)
- Francesco Pierri (44 papers)