Pretraining Chinese BERT for Detecting Word Insertion and Deletion Errors (2204.12052v1)

Published 26 Apr 2022 in cs.CL

Abstract: Chinese BERT models achieve remarkable progress in handling grammatical errors of word substitution. However, they fail to handle word insertion and deletion because BERT assumes that a word exists at each position. To address this, we present a simple and effective Chinese pretrained model. The basic idea is to enable the model to determine whether a word exists at a particular position. We achieve this by introducing a special token [null], the prediction of which stands for the non-existence of a word. In the training stage, we design pretraining tasks such that the model learns to predict [null] and real words jointly given the surrounding context. In the inference stage, the model readily detects whether a word should be inserted or deleted with the standard masked language modeling function. We further create an evaluation dataset to foster research on word insertion and deletion. It includes human-annotated corrections for 7,726 erroneous sentences. Results show that existing Chinese BERT models perform poorly on detecting insertion and deletion errors. Our approach significantly improves the F1 score from 24.1% to 78.1% for word insertion and from 26.5% to 68.5% for word deletion.
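
The inference-time detection described in the abstract can be sketched concretely. The snippet below is an illustrative, non-official sketch of how a masked LM extended with a [null] token could flag deletion and insertion candidates: masking an existing character and seeing [null] predicted suggests that character is redundant, while placing a [MASK] in a gap and seeing a real word strongly preferred over [null] suggests a character is missing. The base checkpoint "bert-base-chinese", the probability threshold, and the helper names are assumptions; in the paper the [null] embedding is learned during pretraining, so an off-the-shelf checkpoint would need that pretraining before these scores are meaningful.

```python
# Illustrative sketch only, not the authors' released code. Assumes a
# checkpoint pretrained to predict [null] as in the paper;
# "bert-base-chinese" and threshold=0.5 are placeholders.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")

# Register the special [null] token from the abstract and grow the
# embedding matrix to cover it (its weights are learned in pretraining).
tokenizer.add_special_tokens({"additional_special_tokens": ["[null]"]})
model.resize_token_embeddings(len(tokenizer))
NULL_ID = tokenizer.convert_tokens_to_ids("[null]")
model.eval()

def _masked_dist(ids, pos):
    """Mask position `pos` and return the masked-LM distribution there."""
    ids = list(ids)
    ids[pos] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=torch.tensor([ids])).logits[0, pos]
    return torch.softmax(logits, dim=-1)

def detect(sentence, threshold=0.5):
    ids = tokenizer.encode(sentence)  # [CLS] x1 ... xn [SEP]
    deletions, insertions = [], []
    # Deletion check: if [null] is likely where a character already
    # stands, that character is probably redundant.
    for i in range(1, len(ids) - 1):
        if _masked_dist(ids, i)[NULL_ID].item() > threshold:
            deletions.append(i)
    # Insertion check: drop a [MASK] into each gap between characters;
    # if the model strongly prefers a real word over [null], a
    # character is likely missing there.
    for i in range(1, len(ids)):
        gap = ids[:i] + [tokenizer.mask_token_id] + ids[i:]
        probs = _masked_dist(gap, i)
        if probs[NULL_ID].item() < 1.0 - threshold and probs.max().item() > threshold:
            insertions.append(i)
    return deletions, insertions
```

The thresholding above is only one plausible decision rule; given the reported F1 gains (24.1% to 78.1% for insertion, 26.5% to 68.5% for deletion), the paper's joint [null]-and-word pretraining is what makes these probabilities informative.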

Authors (7)
  1. Cong Zhou (39 papers)
  2. Yong Dai (33 papers)
  3. Duyu Tang (65 papers)
  4. Enbo Zhao (8 papers)
  5. Zhangyin Feng (14 papers)
  6. Li Kuang (8 papers)
  7. Shuming Shi (126 papers)
Citations (2)
