In-context Example Selection with Influences (2302.11042v2)

Published 21 Feb 2023 in cs.CL and cs.LG

Abstract: In-context learning (ICL) is a powerful paradigm that has emerged from LLMs. Despite its promise, ICL performance is known to be highly sensitive to the input examples. In this work, we use $\textit{in-context influences}$ to analyze few-shot ICL performance directly from the in-context examples. Our proposed influence-based example selection method can identify both positive and negative examples, outperforming several baselines when evaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\%$ performance gap between using the most negative in-context examples and the most positive. In a case study, we apply our influence-based framework to quantify the phenomenon of recency bias in example ordering for few-shot ICL.
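
The abstract does not spell out the estimator, but one common way to realize such per-example influence scores is a subset-sampling, difference-in-means scheme: repeatedly evaluate ICL with random subsets of the example pool, then credit each example with the gap between the average metric of prompts that include it and prompts that exclude it. The sketch below assumes that setup; `eval_fn` is a hypothetical callback that builds a prompt from the chosen examples and returns a validation metric (e.g., dev-set accuracy), and all names are illustrative rather than the authors' API.

```python
import random
from typing import Callable, Sequence


def estimate_in_context_influences(
    pool: Sequence[dict],
    eval_fn: Callable[[list[dict]], float],
    k: int = 4,
    num_trials: int = 200,
    seed: int = 0,
) -> list[float]:
    """Estimate an influence score for each candidate in-context example.

    Assumed estimator (not taken from the paper text): for each example i,
    influence(i) = mean metric over sampled prompts containing i
                 - mean metric over sampled prompts not containing i.
    """
    rng = random.Random(seed)
    n = len(pool)
    with_sum, with_cnt = [0.0] * n, [0] * n
    without_sum, without_cnt = [0.0] * n, [0] * n

    for _ in range(num_trials):
        # Sample a random k-shot prompt and score it on a held-out set.
        subset = rng.sample(range(n), k)
        score = eval_fn([pool[i] for i in subset])
        chosen = set(subset)
        for i in range(n):
            if i in chosen:
                with_sum[i] += score
                with_cnt[i] += 1
            else:
                without_sum[i] += score
                without_cnt[i] += 1

    influences = []
    for i in range(n):
        inc = with_sum[i] / with_cnt[i] if with_cnt[i] else 0.0
        exc = without_sum[i] / without_cnt[i] if without_cnt[i] else 0.0
        influences.append(inc - exc)
    return influences
```

Ranking the pool by these scores would then let one keep the highest-influence examples for the prompt and drop the most negative ones, which is the kind of positive/negative selection the abstract describes.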

Authors (2)
  1. Tai Nguyen (10 papers)
  2. Eric Wong (47 papers)
Citations (40)