NICE: To Optimize In-Context Examples or Not? (2402.06733v3)

Published 9 Feb 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Recent work shows that in-context learning and optimization of in-context examples (ICE) can significantly improve the accuracy of LLMs on a wide range of tasks, leading to an apparent consensus that ICE optimization is crucial for better performance. However, most of these studies assume a fixed or no instruction provided in the prompt. We challenge this consensus by investigating the necessity of optimizing ICE when task-specific instructions are provided and find that there are many tasks for which it yields diminishing returns. In particular, using a diverse set of tasks and a systematically created instruction set with gradually added details, we find that as the prompt instruction becomes more detailed, the returns on ICE optimization diminish. To characterize this behavior, we introduce a task-specific metric called Normalized Invariability to Choice of Examples (NICE) that quantifies the learnability of tasks from a given instruction, and provides a heuristic to help decide whether to optimize instructions or ICE for a new task. Given a task, the proposed metric can reliably predict the utility of optimizing ICE compared to using random ICE. Our code is available at https://github.com/microsoft/nice-icl.
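
The NICE metric is defined precisely in the paper; purely as an illustration of the idea it captures, the Python sketch below estimates how invariant a task's accuracy is to the choice of in-context examples by evaluating the same instruction with several random ICE sets. The `evaluate` callable, the min-max normalization, and the function name `estimate_invariability` are assumptions made for this sketch, not the paper's actual formulation.

```python
from typing import Callable, Sequence

def estimate_invariability(
    evaluate: Callable[[Sequence[str]], float],
    candidate_ice_sets: Sequence[Sequence[str]],
) -> float:
    """Rough invariability-to-ICE score in [0, 1].

    `evaluate` is assumed to build a prompt from a fixed instruction plus
    the given in-context examples and return task accuracy. A score near
    1 means accuracy barely moves across ICE choices (so optimizing ICE
    is unlikely to help); a score near 0 means it swings widely.
    NOTE: illustrative only; the paper's exact NICE definition differs
    in its normalization and sampling details.
    """
    scores = [evaluate(ice) for ice in candidate_ice_sets]
    best, worst = max(scores), min(scores)
    if best == 0.0:
        return 1.0  # degenerate case: no ICE choice yields any accuracy
    # Invert the normalized best-worst gap so "invariant" maps to 1.0.
    return 1.0 - (best - worst) / best
```

A high score under a detailed instruction would match the paper's finding: once the instruction itself carries enough task information, ICE optimization yields diminishing returns over random ICE.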

Authors (4)
  1. Pragya Srivastava
  2. Satvik Golechha
  3. Amit Deshpande
  4. Amit Sharma
Citations (2)