PERT: Pre-training BERT with Permuted Language Model (2203.06906v1)

Published 14 Mar 2022 in cs.CL

Abstract: Pre-trained Language Models (PLMs) have been widely used in various NLP tasks, owing to their powerful text representations trained on large-scale corpora. In this paper, we propose a new PLM called PERT for natural language understanding (NLU). PERT is an auto-encoding model (like BERT) trained with Permuted Language Model (PerLM). The formulation of the proposed PerLM is straightforward. We permute a proportion of the input text, and the training objective is to predict the position of the original token. Moreover, we also apply whole word masking and N-gram masking to improve the performance of PERT. We carried out extensive experiments on both Chinese and English NLU benchmarks. The experimental results show that PERT can bring improvements over various comparable baselines on some of the tasks, while others are not. These results indicate that developing more diverse pre-training tasks is possible instead of masked language model variants. Several quantitative studies are carried out to better understand PERT, which might help design PLMs in the future. Resources are available: https://github.com/ymcui/PERT

Analyzing PERT: Pre-training BERT with Permuted Language Model

The paper proposes a novel pre-trained language model (PLM) called PERT, which targets natural language understanding (NLU) by employing a Permuted Language Model (PerLM) objective during pre-training. The authors deviate from the conventional Masked Language Model (MLM) task used in models like BERT and instead explore a pre-training task that predicts the original position of tokens in permuted text sequences. This methodology challenges the traditional MLM paradigm, aiming to broaden the diversity of pre-training tasks for PLMs.
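
To make the contrast with MLM concrete, the following toy sketch shows the two kinds of supervision on the same sentence. It is only an illustration of the idea described in the abstract; the dictionary-style target encoding is an assumption, not PERT's actual implementation.

```python
# Toy contrast between MLM and PerLM supervision (illustrative sketch only;
# the position->index target encoding is assumed, not taken from PERT's code).

tokens = ["the", "quick", "brown", "fox", "jumps"]

# MLM (BERT-style): hide a token, predict its identity.
mlm_input = ["the", "[MASK]", "brown", "fox", "jumps"]
mlm_targets = {1: "quick"}            # position -> original token

# PerLM (PERT-style): shuffle some tokens in place, then predict, for each
# permuted slot, the position where its original token now resides.
perlm_input = ["the", "brown", "quick", "fox", "jumps"]  # positions 1 and 2 swapped
perlm_targets = {1: 2, 2: 1}          # position -> index now holding its original token

print(mlm_input, mlm_targets)
print(perlm_input, perlm_targets)
```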

Methodology

PERT, an auto-encoding model similar to BERT, introduces PerLM as its primary pre-training task. During training, a proportion of the input text is permuted, and the model's objective is to infer, for each permuted position, where its original token is located. Whole word masking and N-gram masking are also applied when selecting the tokens to permute, with the aim of improving performance by emphasizing word- and phrase-level continuity.
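
Since the summary leaves the data pipeline implicit, here is a minimal sketch of how a PerLM training example might be constructed. The function name `build_perlm_example`, the 15% permutation ratio, the span-sampling scheme, and the label encoding (each selected position mapped to the index now holding its original token) are assumptions for illustration, not PERT's released recipe.

```python
import random

def build_perlm_example(tokens, permute_ratio=0.15, max_ngram=3, seed=0):
    """Build a permuted input and position labels for a PerLM-style objective.

    Hypothetical sketch: contiguous-span selection stands in for whole word /
    N-gram masking, and labels[i] gives the index where the original token of
    position i ended up after shuffling.
    """
    rng = random.Random(seed)
    n = len(tokens)
    budget = min(n, max(2, int(n * permute_ratio)))  # at least two positions to permute

    # Select contiguous spans of 1..max_ngram positions until the budget is met
    # (may slightly overshoot, as span-based masking recipes usually do).
    selected = set()
    while len(selected) < budget:
        length = rng.randint(1, max_ngram)
        start = rng.randrange(n)
        selected.update(range(start, min(start + length, n)))
    selected = sorted(selected)

    # Shuffle which original token lands in each selected slot.
    shuffled = selected[:]
    rng.shuffle(shuffled)

    permuted = list(tokens)
    labels = {}
    for dst, src in zip(selected, shuffled):
        permuted[dst] = tokens[src]  # the token from position src now sits at dst
        labels[src] = dst            # so the label for position src points to dst

    return permuted, labels


sentence = "pre-trained language models learn text representations from large corpora".split()
permuted, labels = build_perlm_example(sentence)
print(permuted)
print(labels)  # e.g. {3: 5, 5: 3} would mean positions 3 and 5 swapped tokens
```

In the model itself, the prediction head would then classify, for each permuted position, over candidate input positions; that head, subword/whole-word boundary handling, and the occasional identity shuffle are omitted from this sketch.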

The authors conducted extensive experiments on both Chinese and English NLU tasks, covering machine reading comprehension (MRC), text classification (TC), and named entity recognition (NER). The results suggest that PERT shows notable improvements on certain tasks, particularly in MRC and NER, yet it does not uniformly outperform MLM-based models across all NLU tasks, notably lagging in text classification.

Results and Discussion

  • Machine Reading Comprehension: PERT exhibited improvements over comparable baselines, suggesting that learning to recover the original word order during pre-training strengthens contextual understanding.
  • Text Classification: PERT's performance was suboptimal compared to traditional MLM models, indicating that permutation-based pre-training introduces challenges that may hinder straightforward text categorization tasks.
  • Named Entity Recognition: The model showed consistent enhancements, likely benefitting from the emphasis on sequence structure inherent in PerLM.

The model’s contrasting performance across tasks implies that while permutation can enhance contextual inference, it may disrupt semantic interpretation vital to simpler sentence-level tasks like TC.

Implications and Future Directions

This exploration of PerLM carries notable implications for the future of PLMs. The authors provide evidence that alternative pre-training tasks, which eschew traditional MLM strategies, can offer distinct advantages in certain contexts. However, the mixed results underscore the need for continued experimentation with the diversity and granularity of permutation-based pre-training tasks.

Future research could focus on refining permutation strategies, such as adjusting their granularity or incorporating hybrid models that balance permutation with token prediction, to address the specific limitations observed in text classification. Additionally, investigating the cognitive parallels between human reading of permuted text and model interpretation might offer novel insights for linguistic representation in AI.

Overall, by questioning the established paradigms of language model pre-training, PERT fosters a dialogue on the necessity and potential of diverse pre-training tasks tailored to specific linguistic challenges.

Authors (3)
  1. Yiming Cui (80 papers)
  2. Ziqing Yang (29 papers)
  3. Ting Liu (329 papers)
Citations (32)