What Did I Do Wrong? Quantifying LLMs' Sensitivity and Consistency to Prompt Engineering (2406.12334v2)

Published 18 Jun 2024 in cs.LG and cs.SE

Abstract: LLMs have changed the way we design and interact with software systems. Their ability to process and extract information from text has drastically improved productivity in a number of routine tasks. Developers who want to include these models in their software stack, however, face a dreadful challenge: debugging LLMs' inconsistent behavior across minor variations of the prompt. We therefore introduce two metrics for classification tasks, namely sensitivity and consistency, which are complementary to task performance. First, sensitivity measures changes of predictions across rephrasings of the prompt, and does not require access to ground truth labels. In contrast, consistency measures how predictions vary across rephrasings for elements of the same class. We perform an empirical comparison of these metrics on text classification tasks, using them as a guideline for understanding failure modes of the LLM. Our hope is that sensitivity and consistency will help guide prompt engineering and obtain LLMs that balance robustness with performance.
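The abstract does not give the paper's exact formulations, but the two ideas can be sketched under simple assumptions: sensitivity as the average pairwise disagreement of predictions across prompt rephrasings (no labels needed), and consistency as the majority-vote agreement of predictions among examples sharing a ground-truth class. Function names and definitions below are illustrative, not the authors' implementation:

```python
from itertools import combinations

def sensitivity(preds):
    """Average pairwise disagreement across rephrasings, per example.

    preds: list of examples, each a list of predicted labels
    (one prediction per prompt rephrasing). No ground truth needed.
    """
    scores = []
    for row in preds:
        pairs = list(combinations(row, 2))
        # Fraction of rephrasing pairs that disagree for this example.
        scores.append(sum(a != b for a, b in pairs) / len(pairs))
    return sum(scores) / len(scores)

def consistency(preds, labels):
    """Majority-vote agreement of predictions within each true class.

    preds: list of per-example prediction lists (one per rephrasing).
    labels: ground-truth class for each example.
    """
    by_class = {}
    for row, y in zip(preds, labels):
        by_class.setdefault(y, []).extend(row)
    per_class = []
    for flat in by_class.values():
        # Share of predictions matching the most common prediction.
        top = max(flat.count(c) for c in set(flat))
        per_class.append(top / len(flat))
    return sum(per_class) / len(per_class)
```

For instance, two examples each classified under two rephrasings, where the second example flips its prediction, yield a sensitivity of 0.5; low sensitivity with high consistency would suggest the prompt is robust to rewording.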

Authors (4)
  1. Federico Errica (21 papers)
  2. Giuseppe Siracusano (21 papers)
  3. Davide Sanvito (12 papers)
  4. Roberto Bifulco (15 papers)
Citations (8)