
Bayesian Example Selection Improves In-Context Learning for Speech, Text, and Visual Modalities (2404.14716v2)

Published 23 Apr 2024 in cs.CL, cs.AI, cs.CV, cs.SD, and eess.AS

Abstract: LLMs can adapt to new tasks through in-context learning (ICL), based on a few examples presented in the dialogue history, without any model parameter updates. Despite this convenience, ICL performance depends heavily on the quality of the in-context examples presented, which makes the selection of in-context examples a critical choice. This paper proposes a novel Bayesian in-Context example Selection method (ByCS) for ICL. Applying Bayes' theorem to the inference probability conditioned on the in-context examples, ByCS focuses on the inverse inference conditioned on the test input. Under the assumption that an accurate inverse inference probability (likelihood) leads to an accurate inference probability (posterior), in-context examples are selected based on their inverse inference results. Diverse and extensive cross-task and cross-modality experiments are performed with speech, text, and image examples. Experimental results show the efficacy and robustness of ByCS across various models, tasks, and modalities.
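
To make the abstract's argument concrete, the following is a minimal sketch of the Bayes'-theorem reading it describes; the notation (test input $x$, first-pass prediction $\hat{y}$, candidate in-context example $(x_i, y_i)$) is assumed here for illustration and is not quoted from the paper:

$$
p\big(y \mid x, (x_i, y_i)\big) \;\propto\; \underbrace{p\big(y_i \mid x_i, x, \hat{y}\big)}_{\text{inverse inference (likelihood)}} \times \underbrace{p\big(y \mid x\big)}_{\text{prior}}
$$

Under this reading, each candidate $(x_i, y_i)$ is scored by how well the model recovers $y_i$ from $x_i$ when conditioned on the test input and its first-pass prediction, and the candidates with the highest inverse inference probability are kept as in-context examples.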

Authors (4)
  1. Siyin Wang (19 papers)
  2. Chao-Han Huck Yang (89 papers)
  3. Ji Wu (62 papers)
  4. Chao Zhang (907 papers)
Citations (3)