T2V-OptJail: Discrete Prompt Optimization for Text-to-Video Jailbreak Attacks (2505.06679v2)

Published 10 May 2025 in cs.CV

Abstract: In recent years, fueled by the rapid advancement of diffusion models, text-to-video (T2V) generation models have achieved remarkable progress, with notable examples including Pika, Luma, Kling, and Open-Sora. Although these models exhibit impressive generative capabilities, they also expose significant security risks due to their vulnerability to jailbreak attacks, where the models are manipulated to produce unsafe content such as pornography, violence, or discrimination. Existing works such as T2VSafetyBench provide preliminary benchmarks for safety evaluation, but lack systematic methods for thoroughly exploring model vulnerabilities. To address this gap, we are the first to formalize the T2V jailbreak attack as a discrete optimization problem and propose a joint objective-based optimization framework, called T2V-OptJail. This framework pursues two key optimization goals: bypassing the built-in safety filtering mechanisms to increase the attack success rate, and preserving semantic consistency both between the adversarial prompt and the unsafe input prompt and between the generated video and the unsafe input prompt, to enhance content controllability. In addition, we introduce an iterative optimization strategy guided by prompt variants, where multiple semantically equivalent candidates are generated in each round, and their scores are aggregated to robustly guide the search toward optimal adversarial prompts. We conduct large-scale experiments on several T2V models, covering both open-source models and real commercial closed-source models. The experimental results show that the proposed method improves over the existing state-of-the-art method by 11.4% and 10.0% in attack success rate as assessed by GPT-4 and by human assessors, respectively, verifying the significant advantages of the method in terms of attack effectiveness and content control.

Summary

Jailbreaking the Text-to-Video Generative Models

The paper "Jailbreaking the Text-to-Video Generative Models" addresses a pressing safety issue in text-to-video (T2V) generative models: their susceptibility to intentional misuse through jailbreak attacks. These models, built on advances in diffusion models, have shown impressive capabilities in generating high-fidelity videos from textual prompts. The paper highlights vulnerabilities in prominent T2V models, specifically Pika, Luma, Kling, and Open-Sora, to adversarial inputs that can elicit unsafe or unethical content.

Methodology and Approach

This research introduces a systematic, optimization-based method to craft adversarial prompts, or "jailbreak prompts," that can bypass a T2V model's internal safety filters and generate content that aligns semantically with harmful or unethical prompts. The authors articulate the problem as an optimization challenge with threefold objectives:

  1. Semantic Alignment: Ensure high semantic similarity between the initial and modified prompts.
  2. Safety Filter Evasion: Ensure the modified prompts evade the model's built-in safety checks.
  3. Output Fidelity: Preserve the semantic match between generated videos and the original harmful prompts.
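The three objectives above can be folded into a single scalar score that guides the search. The sketch below is an illustrative combination, not the paper's exact formulation: filter evasion is treated as a hard gate, and the two semantic objectives are combined as a weighted sum. The `similarity` callable and the weights `w_sem`/`w_vid` are assumptions standing in for whatever embedding-based metric and weighting the authors use.

```python
def joint_score(orig_prompt, adv_prompt, video_caption,
                bypassed_filter, similarity, w_sem=0.5, w_vid=0.5):
    """Score an adversarial prompt against the three objectives.

    `similarity(a, b)` is any text-similarity function returning a value
    in [0, 1] (e.g. cosine similarity of sentence embeddings); it is a
    placeholder for the paper's semantic-alignment metric.
    """
    # Objective 2 (safety filter evasion): a blocked prompt scores zero.
    if not bypassed_filter:
        return 0.0
    # Objective 1 (semantic alignment between original and adversarial prompt).
    sem = similarity(orig_prompt, adv_prompt)
    # Objective 3 (output fidelity between original prompt and generated video,
    # here proxied by a caption of the video).
    vid = similarity(orig_prompt, video_caption)
    return w_sem * sem + w_vid * vid
```

A candidate that is caught by the filter is worthless regardless of how well it preserves meaning, which is why evasion acts as a gate rather than another weighted term.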

To achieve the above, the authors propose a novel prompt mutation strategy. This strategy refines the adversarial prompts iteratively by generating multiple paraphrased variants in each optimization cycle, leveraging a scoring system that balances semantic alignment with the success of safety filtration bypassing.
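The iterative variant-based search can be sketched as a simple greedy loop. This is a minimal illustration under assumptions: `paraphrase` stands in for whatever rewriting model generates semantically equivalent candidates, and `score` for the joint objective; the paper's actual aggregation and search strategy may differ.

```python
def optimize_prompt(unsafe_prompt, paraphrase, score, rounds=10, n_variants=5):
    """Greedy iterative search over semantically equivalent prompt variants.

    `paraphrase(prompt)` returns one semantically equivalent rewrite (e.g.
    produced by an LLM) and `score(prompt)` returns the joint objective
    value; both are hypothetical callables, not the paper's exact components.
    """
    best, best_score = unsafe_prompt, score(unsafe_prompt)
    for _ in range(rounds):
        # Generate multiple semantically equivalent candidates this round.
        candidates = [paraphrase(best) for _ in range(n_variants)]
        # Score every candidate and keep the highest-scoring one.
        top_score, top = max((score(c), c) for c in candidates)
        # Accept the candidate only if it improves the objective.
        if top_score > best_score:
            best, best_score = top, top_score
    return best, best_score
```

Generating several variants per round makes the search robust to noisy scores: a single unlucky paraphrase cannot derail the trajectory, since only the best of the batch is ever compared against the incumbent.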

Experimental Evaluation

The effectiveness of this methodology is empirically validated across a suite of T2V models—Open-Sora, Pika, Luma, and Kling—demonstrating its robustness and its success in outperforming prior approaches such as the T2VSafetyBench baselines and DACA in terms of attack success rate (ASR) and semantic fidelity. The authors report substantial improvements in ASR, with Open-Sora proving the most vulnerable to these jailbreak prompts, a finding that underscores the variability in robustness across model architectures.

Implications and Future Directions

The findings presented in this paper demand a reevaluation of current model safety strategies and provoke discussions on improving the defensive postures of generative video models. The critical aspect of maintaining semantic integrity through adversarial prompts while successfully evading safety filters points to potential weaknesses in existing methodologies that rely heavily on recognizing overtly dangerous content. A pivot towards integrating stronger contextual and semantic checks within safety frameworks appears necessary.

Furthermore, the paper opens avenues for future inquiry into robust model design that genuinely accounts for adversarial context, adapts dynamically to circumvention attempts, and aligns with ethical AI deployment practices. Research could also explore whether analogous vulnerabilities manifest in text-to-image and other synthetic-media models built on similar diffusion techniques.

Overall, this paper contributes essential insights toward fortifying safety protocols against adversarial threats within generative AI models, emphasizing a proactive approach to AI security in light of rapidly advancing capabilities in machine learning and synthetic media production.
