MetaICL: Learning to Learn In Context (2110.15943v2)

Published 29 Oct 2021 in cs.CL and cs.AI

Abstract: We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained LLM is tuned to do in-context learning on a large set of training tasks. This meta-training enables the model to more effectively learn a new task in context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates. We experiment on a large, diverse collection of tasks consisting of 142 NLP datasets including classification, question answering, natural language inference, paraphrase detection and more, across seven different meta-training/target splits. MetaICL outperforms a range of baselines including in-context learning without meta-training and multi-task learning followed by zero-shot transfer. We find that the gains are particularly significant for target tasks that have domain shifts from the meta-training tasks, and that using a diverse set of the meta-training tasks is key to improvements. We also show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task, and outperforms much bigger models with nearly 8x parameters. Finally, we show that MetaICL is complementary to human-written instructions, and the best performance can be achieved by combining both approaches.

Authors (4)
  1. Sewon Min (45 papers)
  2. Mike Lewis (78 papers)
  3. Luke Zettlemoyer (225 papers)
  4. Hannaneh Hajishirzi (176 papers)
Citations (418)

Summary

An Analysis of MetaICL: Learning to Learn In Context

The paper "MetaICL: Learning to Learn In Context" presents a novel meta-training framework aiming to enhance few-shot learning capabilities of pretrained LLMs through in-context conditioning. Authored by researchers from the University of Washington, Meta AI, and Allen Institute for AI, the paper unveils MetaICL, a model that streamlines in-context learning by conditioning on limited training examples without necessitating any parameter updates or task-specific templates.

Overview and Experimental Setup

The core contribution of the paper is MetaICL itself: a meta-training method that enables LLMs to generalize to new tasks across diverse domains. The method is evaluated on a collection of 142 NLP datasets spanning classification, question answering, natural language inference, and paraphrase detection. Importantly, the experiments cover seven different meta-training/target splits, with target tasks held out from meta-training, which makes for a robust evaluation of generalization.

The authors contrast MetaICL with several baselines, including in-context learning without meta-training and multi-task learning followed by zero-shot transfer. These comparisons show substantial gains for MetaICL, particularly on target tasks whose domains shift away from the meta-training tasks.
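
At test time the procedure mirrors meta-training but involves no gradient updates: the k training examples of the target task are concatenated with the test input, and the model scores each candidate label by its conditional likelihood. A minimal sketch, reusing the `model` and `tokenizer` from the sketch above and the same assumed concatenation format:

```python
# Minimal sketch of MetaICL-style inference for classification. No
# parameters are updated; the prediction is the label option with the
# highest conditional likelihood given k demonstrations and the test input.
import torch

@torch.no_grad()
def predict(model, tokenizer, demos, query_x, options):
    context = "".join(f"{x}\n{y}\n\n" for x, y in demos) + f"{query_x}\n"
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids

    scores = []
    for option in options:
        opt_ids = tokenizer(f"{option}\n", return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, opt_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : ctx_ids.shape[1]] = -100  # score only the option tokens
        # .loss is the mean negative log-likelihood of the option tokens.
        scores.append(-model(input_ids, labels=labels).loss.item())
    return options[scores.index(max(scores))]
```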

Key Results and Contributions

MetaICL delivers strong results, notably outperforming models with nearly eight times as many parameters. It also approaches, and sometimes exceeds, the performance of models fully finetuned on the target task, evidence of its robustness and adaptability. Another notable finding is that MetaICL is complementary to human-written instructions: combining the two yields the best performance.

Key numerical results underscore MetaICL's effectiveness. The model shows substantial improvements on target tasks from unseen domains, and it consistently outperforms raw LLM baselines even when task labels carry no semantic hints, indicating that its in-context learning does not merely exploit label wording.

Implications and Future Work

MetaICL's approach has both practical and theoretical implications for AI and language processing. Practically, it reduces the need for extensive finetuning on each new task, enabling more accessible and agile deployment of LLMs in real-world applications where labeled data is scarce or rapidly changing.

On a theoretical level, the framework invites further exploration of the mechanisms that facilitate in-context learning and transfer across heterogeneous tasks. It provides a foundation for future work on meta-learning and few-shot learning paradigms, encouraging continued experimentation with diverse and dynamic task sets.

Potential next steps include improvements to model architectures and training regimes that reduce reliance on specific task labels or domain assumptions, moving toward more general learning frameworks. Addressing bias in pretrained models and exploring applications in privacy-preserving settings are further promising directions for continued research.

In conclusion, MetaICL's innovative approach advances our understanding of learning in context and paves the way for sophisticated methods in few-shot learning scenarios, marking a critical step forward in the development of versatile, high-performing LLMs.
