
Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models (2105.14813v2)

Published 31 May 2021 in cs.CL

Abstract: Sequence-to-sequence learning with neural networks has empirically proven to be an effective framework for Chinese Spelling Correction (CSC), which takes a sentence with some spelling errors as input and outputs the corrected one. However, CSC models may fail to correct spelling errors covered by confusion sets, and will also encounter unseen ones. We propose a method that continually identifies the weak spots of a model to generate more valuable training instances, and apply a task-specific pre-training strategy to enhance the model. The generated adversarial examples are gradually added to the training set. Experimental results show that such an adversarial training method, combined with the pre-training strategy, can improve both the generalization and robustness of multiple CSC models across three different datasets, achieving state-of-the-art performance for the CSC task.
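The loop the abstract describes (mine inputs the current model still gets wrong, fold them back into the training set, retrain) can be sketched as follows. This is an illustrative toy, not the paper's implementation: `ToyCSCModel` and `mine_hard_examples` are hypothetical names, and a dictionary of character substitutions stands in for a real sequence-to-sequence corrector.

```python
class ToyCSCModel:
    """Stand-in corrector: memorizes (wrong_char -> right_char) rules.
    A real CSC model would be a neural sequence-to-sequence network."""

    def __init__(self):
        self.rules = {}

    def train(self, char_pairs):
        # "Training" here is just memorizing character-level corrections.
        for wrong, right in char_pairs:
            self.rules[wrong] = right

    def correct(self, text):
        return "".join(self.rules.get(ch, ch) for ch in text)


def mine_hard_examples(model, pool):
    """Weak-spot identification: keep the (noisy, clean) sentence pairs
    the current model still fails to correct."""
    return [(src, tgt) for src, tgt in pool if model.correct(src) != tgt]


model = ToyCSCModel()
model.train([("0", "o")])  # seed confusion rule: digit 0 -> letter o

# Candidate pool (ASCII stand-ins for Chinese confusion-set substitutions).
pool = [("hell0", "hello"), ("w0r1d", "world")]

# Gradually add adversarial examples until none remain.
for _ in range(3):
    hard = mine_hard_examples(model, pool)
    if not hard:
        break
    noisy, clean = hard[0]
    # Derive new correction rules from the mismatched characters.
    model.train([(a, b) for a, b in zip(noisy, clean) if a != b])

print(model.correct("w0r1d"))  # the mined rule 1 -> l now applies
```

The key idea carried over from the abstract is that the training data grows adaptively: only examples the model currently fails on are added, rather than sampling uniformly from a static confusion set.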

Authors (4)
  1. Chong Li (112 papers)
  2. Cenyuan Zhang (10 papers)
  3. Xiaoqing Zheng (44 papers)
  4. Xuanjing Huang (288 papers)
Citations (26)
