A Cross-Language Investigation into Jailbreak Attacks in Large Language Models (2401.16765v1)

Published 30 Jan 2024 in cs.CR and cs.AI

Abstract: LLMs have become increasingly popular for their advanced text generation capabilities across various domains. However, like any software, they face security challenges, including the risk of 'jailbreak' attacks that manipulate LLMs to produce prohibited content. A particularly underexplored area is the Multilingual Jailbreak attack, where malicious questions are translated into various languages to evade safety filters. Currently, there is a lack of comprehensive empirical studies addressing this specific threat. To address this research gap, we conducted an extensive empirical study on Multilingual Jailbreak attacks. We developed a novel semantic-preserving algorithm to create a multilingual jailbreak dataset and conducted an exhaustive evaluation on both widely-used open-source and commercial LLMs, including GPT-4 and LLaMa. Additionally, we performed interpretability analysis to uncover patterns in Multilingual Jailbreak attacks and implemented a fine-tuning mitigation method. Our findings reveal that our mitigation strategy significantly enhances model defense, reducing the attack success rate by 96.2%. This study provides valuable insights into understanding and mitigating Multilingual Jailbreak attacks.
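The paper does not publish its semantic-preserving algorithm in this abstract, but the core idea — translating malicious prompts into many languages while checking that meaning survives — can be sketched as a round-trip (back-translation) filter. The toy dictionaries and the `token_overlap` similarity below are illustrative placeholders, not the authors' method; a real pipeline would use a machine-translation system and a proper semantic-similarity model.

```python
# Illustrative sketch (not the paper's actual algorithm): build a multilingual
# jailbreak prompt set, keeping only translations whose back-translation stays
# semantically close to the original English prompt.

# Toy forward/backward "translation" tables standing in for a real MT system.
FORWARD = {
    ("de", "how to pick a lock"): "wie man ein schloss knackt",
    ("fr", "how to pick a lock"): "comment crocheter une serrure",
}
BACKWARD = {
    ("de", "wie man ein schloss knackt"): "how to pick a lock",
    ("fr", "comment crocheter une serrure"): "how to open a lock",  # drifted
}

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets: a crude stand-in for semantic similarity."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def build_dataset(prompts, languages, threshold=0.8):
    """Translate each prompt; keep pairs whose round trip clears the threshold."""
    kept = []
    for prompt in prompts:
        for lang in languages:
            translated = FORWARD.get((lang, prompt))
            if translated is None:
                continue
            back = BACKWARD.get((lang, translated), "")
            if token_overlap(prompt, back) >= threshold:
                kept.append((lang, translated))
    return kept

dataset = build_dataset(["how to pick a lock"], ["de", "fr"])
# The exact German round trip is kept; the drifted French one is filtered out.
```

The filter matters because a translation that changes the question's meaning no longer tests the same safety behavior, so semantic preservation is a precondition for attributing a jailbreak to the language change itself.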

Authors (8)
  1. Jie Li (553 papers)
  2. Yi Liu (543 papers)
  3. Chongyang Liu (10 papers)
  4. Ling Shi (119 papers)
  5. Xiaoning Ren (4 papers)
  6. Yaowen Zheng (9 papers)
  7. Yang Liu (2253 papers)
  8. Yinxing Xue (13 papers)
Citations (15)