
GRAD-SUM: Leveraging Gradient Summarization for Optimal Prompt Engineering (2407.12865v1)

Published 12 Jul 2024 in cs.CL and cs.AI

Abstract: Prompt engineering for LLMs is often a manual, time-intensive process that involves generating, evaluating, and refining prompts iteratively to ensure high-quality outputs. While there has been work on automating prompt engineering, existing solutions are generally either tuned to specific tasks with known answers or are quite costly. We introduce GRAD-SUM, a scalable and flexible method for automatic prompt engineering that builds on gradient-based optimization techniques. Our approach incorporates user-defined task descriptions and evaluation criteria, and features a novel gradient summarization module to generalize feedback effectively. Our results demonstrate that GRAD-SUM consistently outperforms existing methods across various benchmarks, highlighting its versatility and effectiveness in automatic prompt optimization.
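The loop the abstract describes can be sketched as follows. This is a minimal illustration of textual-gradient prompt optimization with a summarization step, not the authors' implementation: every function name is an assumption, and the `llm` stub stands in for a real model API call.

```python
# Hypothetical sketch of a GRAD-SUM-style optimization loop.
# The `llm` function is a placeholder; swap in a real LLM client.

def llm(prompt: str) -> str:
    """Stand-in for an LLM call; echoes a tag so the loop runs offline."""
    return f"[response to: {prompt[:40]}]"

def generate_gradients(prompt: str, batch: list[str], criteria: str) -> list[str]:
    """Critique each output against user-defined criteria, yielding one
    piece of natural-language feedback (a 'textual gradient') per example."""
    gradients = []
    for example in batch:
        output = llm(f"{prompt}\n\nInput: {example}")
        critique = llm(
            f"Criteria: {criteria}\nInput: {example}\nOutput: {output}\n"
            "Explain how the prompt should change to better satisfy the criteria."
        )
        gradients.append(critique)
    return gradients

def summarize_gradients(gradients: list[str]) -> str:
    """The summarization module: condense per-example feedback into one
    generalized critique instead of applying each gradient separately."""
    joined = "\n".join(f"- {g}" for g in gradients)
    return llm(f"Summarize the common themes in this feedback:\n{joined}")

def optimize_prompt(prompt: str, batch: list[str], criteria: str, steps: int = 3) -> str:
    """Iteratively rewrite the prompt using the summarized gradient."""
    for _ in range(steps):
        gradients = generate_gradients(prompt, batch, criteria)
        summary = summarize_gradients(gradients)
        prompt = llm(
            f"Current prompt:\n{prompt}\n\nAggregated feedback:\n{summary}\n"
            "Rewrite the prompt to address the feedback."
        )
    return prompt
```

The summarization step is the distinguishing piece: rather than editing the prompt once per example-level critique, the feedback is first generalized across the batch, which is what lets the refined prompt transfer beyond the sampled inputs.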

Authors (2)
  1. Derek Austin (2 papers)
  2. Elliott Chartock (1 paper)
Citations (1)