Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

JailbreakLens: Visual Analysis of Jailbreak Attacks Against Large Language Models (2404.08793v1)

Published 12 Apr 2024 in cs.CR, cs.CL, and cs.HC

Abstract: The proliferation of LLMs has underscored concerns regarding their security vulnerabilities, notably against jailbreak attacks, where adversaries design jailbreak prompts to circumvent safety mechanisms for potential misuse. Addressing these concerns necessitates a comprehensive analysis of jailbreak prompts to evaluate LLMs' defensive capabilities and identify potential weaknesses. However, the complexity of evaluating jailbreak performance and understanding prompt characteristics makes this analysis laborious. We collaborate with domain experts to characterize problems and propose an LLM-assisted framework to streamline the analysis process. It provides automatic jailbreak assessment to facilitate performance evaluation and support analysis of components and keywords in prompts. Based on the framework, we design JailbreakLens, a visual analysis system that enables users to explore the jailbreak performance against the target model, conduct multi-level analysis of prompt characteristics, and refine prompt instances to verify findings. Through a case study, technical evaluations, and expert interviews, we demonstrate our system's effectiveness in helping users evaluate model security and identify model weaknesses.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (7)
  1. Yingchaojie Feng (11 papers)
  2. Zhizhang Chen (5 papers)
  3. Zhining Kang (1 paper)
  4. Sijia Wang (24 papers)
  5. Minfeng Zhu (25 papers)
  6. Wei Zhang (1489 papers)
  7. Wei Chen (1288 papers)
Citations (2)
X Twitter Logo Streamline Icon: https://streamlinehq.com