A Survey on In-context Learning (2301.00234v6)

Published 31 Dec 2022 in cs.CL and cs.AI

Abstract: With the increasing capabilities of LLMs, in-context learning (ICL) has emerged as a new paradigm for NLP, where LLMs make predictions based on contexts augmented with a few examples. It has been a significant trend to explore ICL to evaluate and extrapolate the ability of LLMs. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques, including training strategies, prompt designing strategies, and related analysis. Additionally, we explore various ICL application scenarios, such as data engineering and knowledge updating. Finally, we address the challenges of ICL and suggest potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and improving ICL.

In-Context Learning: A Comprehensive Survey

The paper "A Survey on In-context Learning" provides a detailed examination of in-context learning (ICL), a paradigm that utilizes LLMs to make predictions based on a few example-augmented contexts. The authors present an organized overview of the current progress and challenges associated with ICL, emphasizing its emerging role in NLP.

Overview and Definitions

ICL allows LLMs to generalize from a few given examples without updating model parameters, in contrast to traditional machine learning, which relies on extensive supervised training. The paper formally defines ICL as the ability of a pretrained model to make informed predictions by analogy from the demonstrations provided in its context, and it clarifies how ICL differs from related paradigms such as prompt learning and few-shot learning.
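
To make the paradigm concrete, the following is a minimal sketch of how an ICL prediction is formed: a few labeled demonstrations and the test query are concatenated into one prompt and handed to a frozen model. The `lm_generate` call is a hypothetical stand-in for any text-completion API, not an interface defined in the survey.

```python
# Minimal sketch of an ICL prediction: a frozen LLM is conditioned on a prompt
# that concatenates a few labeled demonstrations with the test query; no
# parameters are updated. `lm_generate` is a hypothetical completion call.

def build_icl_prompt(demonstrations, query, instruction=""):
    """Concatenate an optional instruction, k demonstrations, and the query."""
    parts = [instruction] if instruction else []
    for x, y in demonstrations:
        parts.append(f"Input: {x}\nLabel: {y}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

demos = [
    ("The movie was a delight.", "positive"),
    ("A tedious, joyless film.", "negative"),
]
prompt = build_icl_prompt(demos, "An instant classic.", "Classify the sentiment.")
# prediction = lm_generate(prompt)  # analogy-based prediction, no gradient step
```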

Model Warmup Techniques

The paper explores various methods of enhancing LLMs' ICL capabilities through model warmup strategies, both supervised and self-supervised. Supervised in-context training, such as MetaICL and instruction tuning, improves LLMs' adaptability to new tasks by finetuning them on a broad collection of tasks formatted as demonstrations or natural-language instructions. Alternatively, self-supervised methods leverage raw corpora to align training objectives with downstream ICL tasks, showing promise in refining LLMs' ICL capabilities without extensive labeled data; a rough sketch of the supervised variant follows.
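
The sketch below illustrates supervised in-context training in the spirit of MetaICL: each fine-tuning instance is itself a few-shot prompt drawn from a single task, so the model is explicitly optimized to map demonstrations to a held-out target. The task data and the `finetune_lm` call are illustrative assumptions, not artifacts from the paper.

```python
# Hedged sketch of MetaICL-style warmup: build (few-shot prompt, target) pairs
# per task and fine-tune with the usual next-token loss on the target.

import random

def make_warmup_instance(task_examples, k=4):
    """Sample k demonstrations plus one held-out (query, target) from one task."""
    sampled = random.sample(task_examples, k + 1)
    demos, (query, target) = sampled[:k], sampled[k]
    prompt = "\n\n".join(f"Input: {x}\nLabel: {y}" for x, y in demos)
    prompt += f"\n\nInput: {query}\nLabel:"
    return prompt, f" {target}"

tasks = {
    "sentiment": [
        ("Great film.", "positive"), ("Dull plot.", "negative"),
        ("Loved it.", "positive"), ("Waste of time.", "negative"),
        ("Superb acting.", "positive"),
    ],
}
warmup_data = [make_warmup_instance(examples) for examples in tasks.values()]
# finetune_lm(warmup_data)  # hypothetical trainer; loss on the target tokens only
```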

Demonstration Designing Strategies

A critical component of ICL performance is the selection and formatting of demonstration examples. The paper discusses several strategies, including selecting semantically similar examples and optimizing the order of presentation. It also highlights the role of instructions and intermediate reasoning steps in improving LLMs' performance on complex reasoning tasks.
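
As one concrete example of these strategies, the sketch below retrieves the k training examples whose embeddings are closest to the test query, a common similarity-based selection heuristic. The `embed` encoder and the labeled pool `train_set` are assumptions for illustration.

```python
# Minimal sketch of similarity-based demonstration selection: pick the k
# candidates nearest the query in embedding space. `embed` is an assumed
# sentence encoder returning unit-length vectors; placing the most similar
# example last (closest to the query) is one common ordering heuristic.

import numpy as np

def select_demonstrations(query_vec, candidate_vecs, candidates, k=4):
    """Return the k candidates most similar to the query, least similar first."""
    sims = candidate_vecs @ query_vec      # cosine similarity for unit vectors
    top = np.argsort(sims)[-k:]            # indices of the k highest similarities
    return [candidates[i] for i in top]    # ascending similarity order

# Usage (assuming `embed` and a pool `train_set` of (text, label) pairs):
# cand_vecs = np.stack([embed(x) for x, _ in train_set])
# demos = select_demonstrations(embed(test_input), cand_vecs, train_set)
```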

Scoring Functions

Different scoring functions are evaluated for their efficiency, task coverage, and stability. Direct and PPL scoring compute likelihoods from conditional probabilities or sentence perplexity, respectively, while Channel models invert the direction of the probability computation to improve performance in certain scenarios. These scoring functions are what turn raw LLM probabilities into final task predictions.
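
The sketch below shows the arithmetic behind these three scoring functions under one simplifying assumption: `lm_log_prob(context, continuation)` is a hypothetical helper returning the model's log-probability of `continuation` given `context`.

```python
# Hedged sketch of the Direct, PPL, and Channel scoring functions; only the
# arithmetic around an assumed `lm_log_prob(context, continuation)` is shown.

import math

def direct_score(lm_log_prob, prompt, label):
    # Direct: conditional probability of the candidate label given the
    # demonstrations and the test input.
    return lm_log_prob(prompt, " " + label)

def ppl_score(lm_log_prob, prompt, label):
    # PPL: perplexity of the full sequence with the candidate label appended;
    # negated so that higher scores are better, like the other functions.
    full = prompt + " " + label
    n_tokens = max(len(full.split()), 1)   # crude whitespace token count
    return -math.exp(-lm_log_prob("", full) / n_tokens)

def channel_score(lm_log_prob, demos_text, task_input, label):
    # Channel: invert the direction and score the input given the label,
    # which can help when label tokens are rare or class-imbalanced.
    return lm_log_prob(f"{demos_text}\nLabel: {label}\nInput:", " " + task_input)

# prediction = max(label_set, key=lambda y: direct_score(lm_log_prob, prompt, y))
```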

Analytical Perspectives

The paper explores factors influencing ICL performance, divided between the pretraining and inference stages. Emergent findings reveal the significance of model size, pretraining data distribution, and demonstration characteristics. Additionally, several theoretical analyses, such as connections between ICL and gradient descent, provide insights into the underlying mechanisms of ICL.

Evaluation Frameworks and Applications

Challenging benchmarks and tasks are essential for evaluating ICL's capabilities. The paper discusses both traditional and new, specialized datasets that probe the nuanced abilities of ICL methods. Applications of ICL span various domains such as data engineering, retrieval augmentation, and knowledge updating, showcasing its versatility across traditional and novel tasks.

Challenges and Future Directions

While ICL presents numerous advantages, it also faces challenges in areas such as efficiency, scalability, and robustness. Future research directions include developing new pretraining strategies, distilling ICL abilities to smaller models, and addressing robustness issues. These explorations aim to enhance the practicality and effectiveness of ICL in real-world applications.

In summary, this comprehensive survey highlights ICL’s potential in leveraging LLMs without exhaustive training. Through a detailed examination of strategies, challenges, and applications, the paper serves as a valuable resource for researchers aiming to advance this promising paradigm in artificial intelligence.

Authors (13)
  1. Qingxiu Dong (39 papers)
  2. Lei Li (1293 papers)
  3. Damai Dai (38 papers)
  4. Ce Zheng (45 papers)
  5. Zhiyong Wu (171 papers)
  6. Baobao Chang (80 papers)
  7. Xu Sun (194 papers)
  8. Jingjing Xu (80 papers)
  9. Zhifang Sui (89 papers)
  10. Jingyuan Ma (9 papers)
  11. Rui Li (384 papers)
  12. Heming Xia (22 papers)
  13. Tianyu Liu (177 papers)
Citations (330)