When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks (2311.08993v1)

Published 15 Nov 2023 in cs.CL and cs.AI

Abstract: In-context learning (ICL) has become the default method for using LLMs, making the exploration of its limitations and understanding the underlying causes crucial. In this paper, we find that ICL falls short of handling specification-heavy tasks, which are tasks with complicated and extensive task specifications, requiring several hours for ordinary humans to master, such as traditional information extraction tasks. The performance of ICL on these tasks mostly cannot reach half of the state-of-the-art results. To explore the reasons behind this failure, we conduct comprehensive experiments on 18 specification-heavy tasks with various LLMs and identify three primary reasons: inability to specifically understand context, misalignment in task schema comprehension with humans, and inadequate long-text understanding ability. Furthermore, we demonstrate that through fine-tuning, LLMs can achieve decent performance on these tasks, indicating that the failure of ICL is not an inherent flaw of LLMs, but rather a drawback of existing alignment methods that renders LLMs incapable of handling complicated specification-heavy tasks via ICL. To substantiate this, we perform dedicated instruction tuning on LLMs for these tasks and observe a notable improvement. We hope the analyses in this paper could facilitate advancements in alignment methods enabling LLMs to meet more sophisticated human demands.

Authors (11)
  1. Hao Peng (291 papers)
  2. Xiaozhi Wang (51 papers)
  3. Jianhui Chen (23 papers)
  4. Weikai Li (16 papers)
  5. Yunjia Qi (10 papers)
  6. Zimu Wang (15 papers)
  7. Zhili Wu (3 papers)
  8. Kaisheng Zeng (17 papers)
  9. Bin Xu (192 papers)
  10. Lei Hou (127 papers)
  11. Juanzi Li (144 papers)
Citations (22)