
Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection (2310.20046v1)

Published 30 Oct 2023 in cs.CL

Abstract: LLMs can adapt to new tasks via in-context learning (ICL). ICL is efficient as it does not require any parameter updates to the trained LLM, but only a few annotated examples as input for the LLM. In this work, we investigate an active learning approach for ICL, where there is a limited budget for annotating examples. We propose a model-adaptive, optimization-free algorithm, termed AdaICL, which identifies examples that the model is uncertain about and performs semantic diversity-based example selection. Diversity-based sampling improves overall effectiveness, while uncertainty sampling improves budget efficiency and helps the LLM learn new information. Moreover, AdaICL poses its sampling strategy as a Maximum Coverage problem, which dynamically adapts based on the model's feedback and can be approximately solved via greedy algorithms. Extensive experiments on nine datasets and seven LLMs show that AdaICL improves performance by 4.4 accuracy points over SOTA (a 7.7% relative improvement), is up to 3x more budget-efficient than annotating examples uniformly at random, and outperforms SOTA with 2x fewer ICL examples.
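The abstract describes a two-step selection strategy: score which unlabeled pool examples the model is uncertain about, then pick a diverse subset of them for annotation by greedily solving a Maximum Coverage instance over a semantic nearest-neighbor graph. The minimal sketch below illustrates that greedy coverage step only; it assumes precomputed sentence embeddings and a set of uncertain indices already obtained from model feedback. The function name, the `k` parameter, and the cosine-similarity graph construction are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def greedy_max_coverage(embeddings, uncertain_idx, budget, k=5):
    """Greedy Maximum Coverage selection over a kNN semantic graph (sketch).

    Each candidate example "covers" the uncertain examples among its k
    nearest neighbors (by cosine similarity). We repeatedly pick the
    candidate that covers the most still-uncovered uncertain examples,
    up to the annotation budget.
    """
    # Cosine similarity between all pool examples.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T

    uncertain = set(int(i) for i in uncertain_idx)
    # cover[i] = uncertain examples within i's k nearest neighbors (plus i itself).
    cover = {}
    for i in range(len(embeddings)):
        nbrs = np.argsort(-sim[i])[1 : k + 1]  # skip self at position 0
        cover[i] = {int(j) for j in nbrs if int(j) in uncertain}
        if i in uncertain:
            cover[i].add(i)

    selected, covered = [], set()
    for _ in range(budget):
        best = max(cover, key=lambda i: len(cover[i] - covered))
        if not cover[best] - covered:
            break  # remaining candidates add no new coverage
        selected.append(best)
        covered |= cover[best]
    return selected

# Illustrative usage: select 4 examples to annotate from a pool of 100,
# 30 of which the model was uncertain about (random data for the sketch).
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))
hard = rng.choice(100, size=30, replace=False)
picks = greedy_max_coverage(emb, hard, budget=4)
```

The greedy heuristic is the standard approximation for Maximum Coverage (it achieves a 1 - 1/e factor), which matches the abstract's note that the problem "can be approximately solved via greedy algorithms"; how AdaICL scores uncertainty and rebuilds the instance from model feedback is detailed in the paper itself.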

Authors (7)
  1. Costas Mavromatis (11 papers)
  2. Balasubramaniam Srinivasan (12 papers)
  3. Zhengyuan Shen (7 papers)
  4. Jiani Zhang (21 papers)
  5. Huzefa Rangwala (57 papers)
  6. Christos Faloutsos (88 papers)
  7. George Karypis (110 papers)
Citations (15)