Investigating the Learning Behaviour of In-context Learning: A Comparison with Supervised Learning (2307.15411v2)

Published 28 Jul 2023 in cs.CL

Abstract: LLMs have shown a remarkable capacity for in-context learning (ICL), in which a model learns a new task from just a few demonstration examples without being explicitly trained for it. However, despite the success of LLMs, little is understood about how ICL learns knowledge from the given prompts. In this paper, to make progress toward understanding the learning behaviour of ICL, we train the same LLMs with the same demonstration examples via ICL and supervised learning (SL), respectively, and investigate their performance under label perturbations (i.e., noisy labels and label imbalance) on a range of classification tasks. First, via extensive experiments, we find that gold labels have a significant impact on downstream in-context performance, especially for large models; however, imbalanced labels matter little to ICL across all model sizes. Second, when comparing with SL, we show empirically that ICL is less sensitive to label perturbations than SL, and that ICL gradually attains performance comparable to SL as the model size increases.
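To make the experimental setup concrete, the sketch below shows one way the "noisy labels" perturbation could be applied to a fixed demonstration set, which is then formatted either as an ICL prompt or reused as a supervised fine-tuning dataset. This is a minimal illustration under assumed conventions; the function names (`flip_labels`, `build_icl_prompt`), the binary sentiment task, and the prompt template are hypothetical and not taken from the paper.

```python
import random

# Hypothetical sketch of the label-perturbation setup: corrupt a fraction
# of gold labels in the demonstrations, then build an in-context prompt
# from the (possibly noisy) examples.

LABELS = ["negative", "positive"]  # assumed binary sentiment task

def flip_labels(examples, noise_rate, seed=0):
    """Replace a `noise_rate` fraction of gold labels with a different
    label (the 'noisy labels' perturbation)."""
    rng = random.Random(seed)
    noisy = []
    for text, label in examples:
        if rng.random() < noise_rate:
            label = rng.choice([l for l in LABELS if l != label])
        noisy.append((text, label))
    return noisy

def build_icl_prompt(demos, query):
    """Format demonstrations plus the query as a single ICL prompt."""
    parts = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

demos = [("A wonderful film.", "positive"), ("Dull and slow.", "negative")]
noisy_demos = flip_labels(demos, noise_rate=0.5)
print(build_icl_prompt(noisy_demos, "I loved every minute."))
# For the SL arm of the comparison, the same (text, label) pairs would be
# used as a gradient-based fine-tuning dataset instead of a prompt.
```

Under this framing, the paper's two perturbations differ only in how the demonstration set is constructed: noisy labels flip individual gold labels, while label imbalance skews the class proportions among the demonstrations.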

Authors (9)
  1. Xindi Wang (20 papers)
  2. Yufei Wang (141 papers)
  3. Can Xu (98 papers)
  4. Xiubo Geng (36 papers)
  5. Bowen Zhang (161 papers)
  6. Chongyang Tao (61 papers)
  7. Frank Rudzicz (90 papers)
  8. Robert E. Mercer (14 papers)
  9. Daxin Jiang (138 papers)
Citations (10)