
A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models (2307.12980v1)

Published 24 Jul 2023 in cs.CV

Abstract: Prompt engineering is a technique that involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks. Prompts can be created manually as natural language instructions or generated automatically as either natural language instructions or vector representations. Prompt engineering makes it possible to perform predictions based solely on prompts, without updating model parameters, and eases the application of large pre-trained models to real-world tasks. In past years, prompt engineering has been well studied in natural language processing. Recently, it has also been intensively studied in vision-language modeling. However, there is currently no systematic overview of prompt engineering on pre-trained vision-language models. This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models: multimodal-to-text generation models (e.g. Flamingo), image-text matching models (e.g. CLIP), and text-to-image generation models (e.g. Stable Diffusion). For each type of model, a brief model summary, prompting methods, prompting-based applications, and the corresponding responsibility and integrity issues are summarized and discussed. Furthermore, the commonalities and differences between prompting on vision-language models, language models, and vision models are also discussed. The challenges, future directions, and research opportunities are summarized to foster future research on this topic.

Comprehensive Survey on Prompt Engineering in Vision-Language Models

Introduction to Prompt Engineering in Vision-Language Models

Prompt engineering has emerged as an effective technique for adapting pre-trained vision-language models (VLMs) to new tasks without extensive retraining or fine-tuning. It augments model inputs with task-specific hints, enabling models to understand and perform tasks with minimal labeled data. This paradigm shift has led to substantial efficiency gains, particularly when leveraging pre-trained models for domain-specific applications.

Taxonomy of Prompting Methods

Prompting methods in VLMs can be broadly classified into hard and soft prompts. Hard prompts consist of discrete, interpretable text tokens that guide the model, while soft prompts involve continuous vectors tuned to optimize performance on specific tasks. This classification offers a framework for understanding the diverse strategies employed in prompting VLMs, facilitating a structured analysis of existing methodologies.
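To make the distinction concrete, here is a minimal PyTorch sketch (not from the paper) contrasting a hard prompt, written as a plain text template, with a soft prompt implemented as learnable vectors prepended to a frozen backbone's token embeddings. All names, dimensions, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hard prompt: discrete, human-readable tokens, filled in at inference time.
hard_prompt = "a photo of a {}."

class SoftPrompt(nn.Module):
    """Soft prompt: n_tokens continuous vectors optimized by gradient descent
    while the pre-trained backbone stays frozen."""
    def __init__(self, n_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim) from the frozen embedding layer
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)  # prepend learned context

# During adaptation, only the soft-prompt parameters receive gradients.
soft_prompt = SoftPrompt(n_tokens=16, embed_dim=512)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)
```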

Prompting Multimodal-to-Text Generation Models

Multimodal-to-text generation models synthesize textual outputs from combined visual and textual inputs. Integrating visual and linguistic information requires careful prompting strategies to generate coherent and contextually relevant outputs. We review representative models, prompt-tuning strategies, and their applications in tasks such as visual question answering and image captioning, and examine the role of both hard and soft prompts in enhancing model performance across these tasks.
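As an illustration of hard prompting a multimodal-to-text model, the sketch below uses BLIP-2 via the Hugging Face transformers library as a stand-in, since Flamingo itself is not openly released; the checkpoint name, image path, and "Question: ... Answer:" template are assumptions chosen for demonstration.

```python
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load a publicly available multimodal-to-text model (example checkpoint).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("example.jpg")
# Hard prompt steering the model toward visual question answering.
prompt = "Question: what is shown in the picture? Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```

Changing only the prompt (e.g. dropping the question to obtain a plain caption) repurposes the same frozen model for a different task, which is the central appeal of prompting in this setting.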

Prompting Image-Text Matching Models

Image-text matching models aim to establish semantic correspondence between images and text. We examine different approaches to prompting these models, including patch-wise prompts, annotation prompts, and unified prompting strategies that span both textual and visual inputs. The utility of prompting for improving task accuracy and adapting models to novel scenarios is highlighted, along with insights into future research directions.
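A familiar instance of hard prompting in this setting is zero-shot classification with CLIP, where class names are wrapped in a text template and matched against the image. The sketch below uses the Hugging Face transformers CLIP checkpoint; the class list, template, and image path are purely illustrative.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["cat", "dog", "car"]
texts = [f"a photo of a {c}." for c in classes]   # hard prompt template
image = Image.open("example.jpg")

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Similarity of the image to each prompted class description.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(classes, probs[0].tolist())))
```

The template itself is a design choice; much of the prompting literature on CLIP studies either better hand-crafted templates or learned (soft) replacements for them.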

Prompting Text-to-Image Generation Models

Text-to-image generation models represent a cutting-edge area where prompts direct the synthesis of images from textual descriptions. This section outlines the advances in prompt engineering for such models, emphasizing complex control over the generation process through semantic prompt design, diversified generation, and controllable synthesis. The expansion of prompting techniques to generate videos, 3D models, and perform complex tasks further underscores the potential of prompt engineering in creative and practical applications.
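As a concrete example of prompt-driven, controllable synthesis, the following sketch uses the diffusers library with a Stable Diffusion checkpoint; the prompt, negative prompt, and guidance scale are illustrative choices rather than recommendations from the survey.

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any compatible Stable Diffusion weights work the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "an oil painting of a lighthouse at sunset, highly detailed"
negative_prompt = "blurry, low quality"  # steers generation away from unwanted attributes

# guidance_scale trades prompt fidelity against sample diversity.
image = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=7.5).images[0]
image.save("lighthouse.png")
```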

Challenges and Future Directions

The survey identifies several challenges in the current landscape of prompt engineering for VLMs, including the need to better understand the mechanisms behind in-context learning and instruction tuning, and to develop efficient strategies for visual prompting. The prospect of universal prompts and the ethical considerations involved in prompting VLMs also present areas for future exploration.

Conclusion

Prompt engineering has revolutionized the application of pre-trained VLMs, enabling task-specific adaptations with unprecedented efficiency. By systematically categorizing prompting methods and examining their applications across different model types, this survey provides a foundational understanding and highlights the potential for innovation in prompt engineering within vision-language research. As the field continues to evolve, focusing on novel prompting strategies, ethical AI considerations, and cross-model applicability will be crucial in realizing the full potential of vision-language models.

Authors (10)
  1. Jindong Gu (101 papers)
  2. Zhen Han (54 papers)
  3. Shuo Chen (127 papers)
  4. Ahmad Beirami (86 papers)
  5. Bailan He (12 papers)
  6. Gengyuan Zhang (13 papers)
  7. Ruotong Liao (8 papers)
  8. Yao Qin (41 papers)
  9. Volker Tresp (158 papers)
  10. Philip Torr (172 papers)