Towards Minimal Supervision BERT-based Grammar Error Correction (2001.03521v1)
Published 10 Jan 2020 in cs.CL and cs.AI
Abstract: Current grammatical error correction (GEC) models typically treat the task as sequence generation, which requires large amounts of annotated data and limits their application in data-limited settings. We incorporate contextual information from a pre-trained language model to better leverage annotations and to benefit multilingual scenarios. Results show the strong potential of Bidirectional Encoder Representations from Transformers (BERT) for the grammatical error correction task.
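The abstract describes using a pre-trained BERT's contextual predictions for GEC instead of training a full sequence-to-sequence model. As a rough illustration of that general idea (not the authors' exact method), the sketch below uses a pre-trained BERT masked language model via the HuggingFace transformers library to suggest replacements for a suspect token; the `propose_corrections` helper, the chosen checkpoint, and the example sentence are all hypothetical.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

# Load an off-the-shelf pre-trained BERT masked LM (no GEC fine-tuning).
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def propose_corrections(sentence: str, position: int, top_k: int = 5):
    """Mask the word at `position` and let BERT's contextual
    predictions suggest candidate replacements for it."""
    words = sentence.split()
    original = words[position]
    words[position] = tokenizer.mask_token
    inputs = tokenizer(" ".join(words), return_tensors="pt")
    # Locate the [MASK] token in the subword sequence.
    mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    top_ids = logits[0, mask_index[0]].topk(top_k).indices
    return original, [tokenizer.decode([i]).strip() for i in top_ids]

# Example: BERT's top suggestions for an erroneous verb form.
orig, candidates = propose_corrections("He go to school every day .", 1)
print(f"'{orig}' ->", candidates)
```

In this toy setup, the model typically ranks grammatical alternatives such as "goes" or "went" highly, which is the kind of contextual signal the paper proposes exploiting under minimal supervision.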