2000 character limit reached
Text Prompt Injection of Vision Language Models (2510.09849v1)
Published 10 Oct 2025 in cs.CL and cs.CV
Abstract: The widespread application of large vision LLMs has significantly raised safety concerns. In this project, we investigate text prompt injection, a simple yet effective method to mislead these models. We developed an algorithm for this type of attack and demonstrated its effectiveness and efficiency through experiments. Compared to other attack methods, our approach is particularly effective for large models without high demand for computational resources.
Paper Prompts
Sign up for free to create and run prompts on this paper using GPT-5.
Top Community Prompts
Collections
Sign up for free to add this paper to one or more collections.