A Chinese Dataset for Evaluating the Safeguards in Large Language Models (2402.12193v3)

Published 19 Feb 2024 in cs.CL

Abstract: Many studies have demonstrated that LLMs can produce harmful responses, exposing users to unexpected risks when LLMs are deployed. Previous studies have proposed comprehensive taxonomies of the risks posed by LLMs, as well as corresponding prompts for probing the safety mechanisms of LLMs. However, the focus has been almost exclusively on English, with little work on other languages. Here we aim to bridge this gap. We first introduce a dataset for the safety evaluation of Chinese LLMs, and then extend it to two further scenarios that help identify false negative and false positive examples in risky-prompt rejection. We further present a set of fine-grained safety assessment criteria for each risk type, supporting both manual annotation and automatic evaluation of the harmfulness of LLM responses. Our experiments on five LLMs show that region-specific risks are the most prevalent risk type, and the main issue with all of the Chinese LLMs we evaluated. Our data is available at https://github.com/Libr-AI/do-not-answer. Warning: this paper contains example data that may be offensive, harmful, or biased.
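The evaluation pipeline the abstract describes (risky prompts in, per-risk-type harmfulness judgments out) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' released code: the file name `cn_do_not_answer.csv`, its `risk_type` and `prompt` columns, and the `query_model` and `is_harmful` stubs are all hypothetical; the actual data format and evaluation criteria are in the linked GitHub repository.

```python
# Minimal sketch of a safety-evaluation loop over a risky-prompt dataset.
# Assumptions (not from the paper's repo): the dataset is a CSV named
# "cn_do_not_answer.csv" with "risk_type" and "prompt" columns, and
# query_model is a stand-in for calling the LLM under evaluation.
import csv
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for the LLM under test; replace with a real API call."""
    return "对不起，我不能帮助您完成这个请求。"  # a canned refusal, for illustration

def is_harmful(response: str, risk_type: str) -> bool:
    """Placeholder harmfulness judge. The paper applies fine-grained,
    per-risk-type criteria via human annotators or an LLM judge; this
    crude refusal-marker check only illustrates the interface."""
    refusal_markers = ("对不起", "不能", "无法")
    return not any(marker in response for marker in refusal_markers)

harmful_by_risk = Counter()
total_by_risk = Counter()
with open("cn_do_not_answer.csv", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        risk, prompt = row["risk_type"], row["prompt"]
        total_by_risk[risk] += 1
        if is_harmful(query_model(prompt), risk):
            harmful_by_risk[risk] += 1

# Report a harmfulness rate per risk type, which is the kind of breakdown
# that lets the paper single out region-specific risks as the weakest category.
for risk in sorted(total_by_risk):
    rate = harmful_by_risk[risk] / total_by_risk[risk]
    print(f"{risk}: {rate:.1%} harmful ({total_by_risk[risk]} prompts)")
```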

Authors (9)
  1. Yuxia Wang (41 papers)
  2. Zenan Zhai (10 papers)
  3. Haonan Li (43 papers)
  4. Xudong Han (40 papers)
  5. Lizhi Lin (4 papers)
  6. Zhenxuan Zhang (11 papers)
  7. Jingru Zhao (1 paper)
  8. Preslav Nakov (253 papers)
  9. Timothy Baldwin (125 papers)
Citations (4)