Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
38 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Jailbreaking LLMs with Arabic Transliteration and Arabizi (2406.18725v2)

Published 26 Jun 2024 in cs.LG and cs.CL

Abstract: This study identifies the potential vulnerabilities of LLMs to 'jailbreak' attacks, specifically focusing on the Arabic language and its various forms. While most research has concentrated on English-based prompt manipulation, our investigation broadens the scope to investigate the Arabic language. We initially tested the AdvBench benchmark in Standardized Arabic, finding that even with prompt manipulation techniques like prefix injection, it was insufficient to provoke LLMs into generating unsafe content. However, when using Arabic transliteration and chatspeak (or arabizi), we found that unsafe content could be produced on platforms like OpenAI GPT-4 and Anthropic Claude 3 Sonnet. Our findings suggest that using Arabic and its various forms could expose information that might remain hidden, potentially increasing the risk of jailbreak attacks. We hypothesize that this exposure could be due to the model's learned connection to specific words, highlighting the need for more comprehensive safety training across all language forms.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (5)
  1. Mansour Al Ghanim (3 papers)
  2. Saleh Almohaimeed (6 papers)
  3. Mengxin Zheng (17 papers)
  4. Yan Solihin (15 papers)
  5. Qian Lou (40 papers)
Citations (1)
X Twitter Logo Streamline Icon: https://streamlinehq.com