
Learning How To Ask: Cycle-Consistency Refines Prompts in Multimodal Foundation Models (2402.08756v1)

Published 13 Feb 2024 in cs.CL and cs.CV

Abstract: When LLMs perform zero-shot inference, they typically use a prompt with a task specification, and generate a completion. However, there is no work to explore the possibility of the reverse - going from completion to task specification. In this paper, we employ both directions to perform cycle-supervised learning entirely in-context. Our goal is to create a forward map f : X -> Y (e.g. image -> generated caption), coupled with a backward map g : Y -> X (e.g. caption -> generated image) to construct a cycle-consistency "loss" (formulated as an update to the prompt) to enforce g(f(X)) ~= X. The technique, called CyclePrompt, uses cycle-consistency as a free supervisory signal to iteratively craft the prompt. Importantly, CyclePrompt reinforces model performance without expensive fine-tuning, without training data, and without the complexity of external environments (e.g. compilers, APIs). We demonstrate CyclePrompt in two domains: code generation and image captioning. Our results on the HumanEval coding benchmark put us in first place on the leaderboard among models that do not rely on extra training data or usage of external environments, and third overall. Compared to the GPT4 baseline, we improve accuracy from 80.5% to 87.2%. In the vision-language space, we generate detailed image captions which outperform baseline zero-shot GPT4V captions, when tested against natural (VQAv2) and diagrammatic (FigureQA) visual question-answering benchmarks. To the best of our knowledge, this is the first use of self-supervised learning for prompting.

Refining Prompts through Cycle-Consistency in Multimodal Foundation Models

Introduction

Prompt engineering is a critical aspect of working with LLMs, enabling applications that range from code generation to multimodal tasks such as image captioning. Because fine-tuning LLMs is expensive, practitioners often improve performance by altering the prompt instead. This paper introduces CyclePrompt, a method that iteratively refines prompts via cycle-consistency: it translates an input to another modality (e.g., text to code, or image to caption) and back, then uses the round-trip discrepancy to update the prompt. Notably, the method achieves enhanced performance without additional training data or complex external environments.

CyclePrompt Mechanism

The CyclePrompt methodology establishes a forward map and a backward map between two data types X and Y, such as text and code or images and captions. It formulates a cycle-consistency "loss", expressed as an update to the prompt rather than a gradient, from the discrepancy between the original input and the output of the round trip g(f(X)). The setup resembles an autoencoder, but here the cycle serves as a free supervisory signal for prompt refinement: the prompt is revised according to the cycle's outcome, improving the quality of the model's outputs in a self-supervised manner.
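The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `forward`, `backward`, and the prompt update below are toy string functions standing in for real foundation-model calls, rigged so the round trip visibly fails until the prompt is refined.

```python
def forward(x: str, prompt: str) -> str:
    # Toy stand-in for f: X -> Y (e.g. image -> caption).
    # It drops half of the input unless the prompt asks for detail.
    return x if "detail" in prompt else x[: len(x) // 2]

def backward(y: str) -> str:
    # Toy stand-in for g: Y -> X (e.g. caption -> regenerated image).
    # Here reconstruction is the identity on whatever f preserved.
    return y

def cycle_prompt(x: str, prompt: str, max_iters: int = 3) -> str:
    """Refine `prompt` until g(f(x)) == x or the iteration budget runs out."""
    for _ in range(max_iters):
        x_cycled = backward(forward(x, prompt))
        if x_cycled == x:  # cycle-consistency satisfied
            break
        # The "loss" is verbal: fold the observed discrepancy back into the prompt.
        prompt += f" Include every detail; the cycle lost: '{x[len(x_cycled):]}'."
    return prompt

refined = cycle_prompt("a red square on white", "Describe the image.")
```

In a real system, each of the three calls inside the loop would be a model invocation, and the discrepancy would be a natural-language critique; the key point the sketch preserves is that supervision comes only from comparing the input with its reconstruction.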

Experiments and Results

Code Generation

CyclePrompt was applied to the HumanEval code-generation benchmark, where it reached first place among models that use no extra training data or external tools, and third place overall, improving accuracy from 80.5% to 87.2% over the GPT-4 baseline. This result shows that iterative cycles of prompt refinement yield measurably more precise code generation.
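In the code domain the cycle instantiates as specification, to generated code, to recovered specification. The sketch below is hypothetical: `spec_to_code` and `code_to_spec` are toy stand-ins for model calls, arranged so the generated code misses an edge case until the refined prompt mentions it.

```python
def spec_to_code(spec: str, prompt: str) -> str:
    # Toy f: only honors the inclusive upper bound if the prompt stresses edge cases.
    body = "return list(range(n + 1))" if "edge" in prompt else "return list(range(n))"
    return f"def up_to(n):\n    # {spec}\n    {body}"

def code_to_spec(code: str) -> str:
    # Toy g: read back the specification the code actually implements.
    return "numbers 0..n inclusive" if "n + 1" in code else "numbers 0..n-1"

def refine_prompt(spec: str, prompt: str, max_iters: int = 3) -> str:
    """Fold spec-vs-recovered-spec mismatches back into the prompt."""
    for _ in range(max_iters):
        recovered = code_to_spec(spec_to_code(spec, prompt))
        if recovered == spec:
            break
        prompt += f" Mind edge cases: got '{recovered}', wanted '{spec}'."
    return prompt

final_prompt = refine_prompt("numbers 0..n inclusive", "Write the function.")
```

The back-translation step is what distinguishes this from plain self-refinement: the code is never executed, so no compiler or test harness is needed; the only signal is whether the spec survives the round trip.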

Image Captioning

In image captioning, CyclePrompt was evaluated on natural (VQAv2) and diagrammatic (FigureQA) visual question-answering benchmarks. Its refined captions outperformed baseline zero-shot GPT-4V captions: when questions were answered from the captions alone, the cycle-refined captions supported more accurate answers.
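The evaluation idea, answering visual questions from the generated caption alone, can be illustrated with a toy caption-only reader. This is an assumption for illustration: the paper answers from the caption with an LLM, not with keyword matching as below.

```python
def answer_from_caption(caption: str, question: str, choices: list[str]) -> str:
    # Toy caption-only reader: pick the answer choice the caption mentions.
    # A real reader would condition on the question; this toy only checks
    # which candidate answers appear among the caption's words.
    words = set(caption.lower().split())
    for choice in choices:
        if choice.lower() in words:
            return choice
    return choices[0]  # no evidence in the caption: forced to guess

sparse = "a dog"
detailed = "a small brown dog sitting on a green sofa"
# The detailed caption supports the color question; the sparse one forces a guess,
# which is why richer cycle-refined captions score higher on caption-only VQA.
```

The point the toy preserves is the metric's logic: a caption is only as good as the questions it lets a downstream reader answer without seeing the image.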

Implications and Future Directions

The success of CyclePrompt in enhancing model performance through prompt refinement, without additional training data or complex external tools, opens new avenues for research and application in AI, particularly for code generation and image captioning. It demonstrates the potential for cycle-based learning and refinement to improve the utility of LLMs across domains, hinting at broader applications to other complex tasks.

The paper also lays groundwork for future explorations into the dynamics of forward and backward functions in cycle-consistency, the characterization of modality misalignment, and the potential for semantic gradient descent as a method for refining model understanding and performance iteratively. These areas present rich opportunities for advancing our understanding and leveraging of LLMs.

Conclusion

The introduction of CyclePrompt marks a significant step forward in the field of LLMs and generative AI, showcasing the power of cycle-consistency for prompt refinement without the need for additional resources. This method not only elevates the performance of LLMs in completing specific tasks but also broadens our understanding of effective interaction with these models, suggesting a promising direction for future research and development in AI.

Authors (6)
  1. Maurice Diesendruck
  2. Jianzhe Lin
  3. Shima Imani
  4. Gayathri Mahalingam
  5. Mingyang Xu
  6. Jie Zhao