Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey (2111.01243v1)

Published 1 Nov 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Large, pre-trained transformer-based LLMs such as BERT have drastically changed the NLP field. We present a survey of recent work that uses these LLMs to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches. We also present approaches that use pre-trained LLMs to generate data for training augmentation or other purposes. We conclude with discussions on limitations and suggested directions for future research.

Understanding Recent Advances in NLP Through Pre-Trained LLMs

Introduction

The evolution of NLP has been significantly shaped by large pre-trained transformer-based language models (PLMs) such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer). These models have become the cornerstone of modern NLP systems thanks to their ability to capture the nuances of language better than their predecessors. The key innovation is a two-step process: pre-training on a large corpus to learn general language representations, followed by fine-tuning for specific tasks. This paper examines how researchers leverage PLMs across a wide range of NLP tasks.

Paradigm 1: Pre-Train then Fine-Tune

A fundamental paradigm for utilizing PLMs is the "pre-train then fine-tune" approach. Traditional statistical methods often relied on hand-crafted features, whereas PLMs learn latent representations from a generic large-scale corpus that are then refined for a target task. Fine-tuning adapts these models to specific NLP tasks with improved data efficiency, requiring far less task-specific labeled data than training a model from scratch. The paradigm ranges from fine-tuning the entire PLM, to inserting lightweight adapters for efficiency, to approaches that update only a small fraction of the model's weights.
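
As a concrete illustration (a minimal sketch, not drawn from the survey itself), the snippet below fine-tunes a pre-trained BERT checkpoint on a sentiment classification dataset with the Hugging Face Transformers library; the checkpoint, dataset, and hyperparameters are illustrative placeholders.

```python
# A minimal sketch of "pre-train then fine-tune" using Hugging Face Transformers.
# The checkpoint ("bert-base-uncased"), the IMDB dataset, and the hyperparameters
# are illustrative choices, not the survey's prescription.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # binary sentiment classification

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-imdb-finetuned",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

# Only a small subset is used here so the sketch runs quickly.
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)))
trainer.train()
```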

Paradigm 2: Prompt-based Learning

Prompt-based learning represents another paradigm in which a PLM is given prompts (short phrases or natural-language contexts) that guide it in solving, or reformulating, a variety of NLP tasks. The method exploits the model's pre-training objective of predicting missing words: by prompting the model to "fill in the blank," a downstream task is cast in a form where the model can directly apply its pre-trained knowledge. Prompts may be manually crafted, automatically generated, or even used as a basis for explaining model behavior.
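
For instance (a hedged sketch, not the survey's recipe), a cloze-style prompt can recast sentiment classification as the masked-word prediction task BERT was pre-trained on; the prompt wording and checkpoint below are assumptions for illustration.

```python
# A hedged sketch of cloze-style prompting: sentiment classification is recast
# as predicting a masked word, which is what BERT was pre-trained to do.
# The prompt wording and checkpoint are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "The plot was predictable and the acting was wooden."
prompt = f"{review} Overall, the movie was [MASK]."

# Inspect the most likely fill-ins; a verbalizer would map words such as
# "great" or "terrible" back to positive/negative labels.
for candidate in fill_mask(prompt, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```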

Paradigm 3: NLP as Text Generation

A third paradigm reframes NLP tasks as text generation problems, exploiting the generative capabilities of models such as GPT-2 and T5. A task is reformulated so that the desired output, including any label or answer information, is produced directly as a text sequence in response to the input. This formulation is flexible enough to cover tasks such as sequence labeling and question answering, where the model is often asked to "fill in" a structured output template.
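
A small sketch of this text-to-text framing with T5 follows: the task is named in a textual prefix and the answer comes back as generated text. The checkpoint and prefix are the standard ones shipped with T5, used here purely as an example rather than anything specified by the survey.

```python
# Text-to-text framing with T5: the task description is part of the input text
# and the output (translation, answer, label words, etc.) is generated as text.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```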

Generating Data with PLMs

Beyond direct application to NLP tasks, PLMs are also adept at generating synthetic labeled data. This capability is particularly useful in scenarios with limited labeled data, and data augmentation through PLMs can improve model performance in domains such as information extraction and question answering. Additionally, PLMs can produce auxiliary data, such as generated explanations, that provides insight into model behavior.
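
As a hedged sketch of how such augmentation might look in practice (the GPT-2 checkpoint, prompt template, and label set are assumptions, not the survey's recipe), a generative PLM can be conditioned on a label and asked to produce new examples carrying that label.

```python
# A hedged sketch of label-conditioned data augmentation with a generative PLM.
# The GPT-2 checkpoint, prompt template, and labels are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

synthetic = []
for label in ["positive", "negative"]:
    prompt = f"Write a {label} movie review:"
    outputs = generator(prompt, max_new_tokens=40,
                        num_return_sequences=3, do_sample=True)
    for out in outputs:
        # Keep only the newly generated continuation and attach the seed label.
        text = out["generated_text"][len(prompt):].strip()
        synthetic.append({"text": text, "label": label})

print(synthetic[0])
```

In practice, generated examples are usually filtered, for instance with a classifier trained on the original labeled data, before being added to the training set.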

Conclusion

PLMs have ushered in a new era in NLP with their advanced text understanding and generative capabilities. Researchers have made significant progress in applying these models to traditional NLP tasks and in developing new methods such as prompt-based learning and synthetic data generation, and the rapid growth of PLM applications suggests this momentum will continue.

With ongoing research and development, we are moving towards more effective and efficient NLP solutions capable of tackling the complexities of human language, boosting the field's progress and extending its possibilities even further.

Authors (9)
  1. Bonan Min (20 papers)
  2. Hayley Ross (6 papers)
  3. Elior Sulem (7 papers)
  4. Amir Pouran Ben Veyseh (20 papers)
  5. Thien Huu Nguyen (61 papers)
  6. Oscar Sainz (14 papers)
  7. Eneko Agirre (53 papers)
  8. Ilana Heinz (1 paper)
  9. Dan Roth (222 papers)
Citations (825)