CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility (2307.09705v1)
Abstract: With the rapid evolution of LLMs, there is a growing concern that they may pose risks or have negative social impacts. Therefore, evaluation of alignment with human values is becoming increasingly important. Previous work mainly focuses on assessing the performance of LLMs on certain knowledge and reasoning abilities, while neglecting alignment with human values, especially in a Chinese context. In this paper, we present CValues, the first Chinese human values evaluation benchmark to measure the alignment ability of LLMs in terms of both safety and responsibility criteria. To this end, we manually collected adversarial safety prompts across 10 scenarios and had professional experts induce responsibility prompts from 8 domains. To provide a comprehensive values evaluation of Chinese LLMs, we not only conduct human evaluation for reliable comparison, but also construct multiple-choice prompts for automatic evaluation. Our findings suggest that while most Chinese LLMs perform well in terms of safety, there is considerable room for improvement in terms of responsibility. Moreover, both automatic and human evaluation are important for assessing different aspects of human values alignment. The benchmark and code are available on ModelScope and GitHub.
- Guohai Xu (21 papers)
- Jiayi Liu (60 papers)
- Ming Yan (190 papers)
- Haotian Xu (48 papers)
- Jinghui Si (2 papers)
- Zhuoran Zhou (7 papers)
- Peng Yi (52 papers)
- Xing Gao (133 papers)
- Jitao Sang (71 papers)
- Rong Zhang (133 papers)
- Ji Zhang (176 papers)
- Chao Peng (66 papers)
- Fei Huang (408 papers)
- Jingren Zhou (198 papers)