
Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction (2402.11420v2)

Published 18 Feb 2024 in cs.CL

Abstract: Recently, LLMs have been widely studied for their roles in various downstream NLP tasks. As a fundamental task in NLP, Chinese Grammatical Error Correction (CGEC) aims to correct all potential grammatical errors in input sentences. Previous studies have shown that LLMs' performance as correctors on CGEC remains unsatisfactory because of the task's challenging nature. To help the CGEC field better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored. Considering the rich grammatical knowledge stored in LLMs and their powerful semantic understanding capabilities, we utilize LLMs as explainers that provide explanatory information to small CGEC models during error correction, thereby enhancing performance. We also use LLMs as evaluators to obtain more reasonable CGEC evaluations, alleviating the difficulties caused by the subjectivity of the CGEC task. In particular, our work is also an active exploration of how LLMs and small models can better collaborate on downstream tasks. Extensive experiments and detailed analyses on widely used datasets verify the effectiveness of our intuitions and the proposed methods.

Authors (9)
  1. Yinghui Li (65 papers)
  2. Shang Qin (3 papers)
  3. Yangning Li (49 papers)
  4. Libo Qin (77 papers)
  5. Xuming Hu (120 papers)
  6. Wenhao Jiang (40 papers)
  7. Hai-Tao Zheng (94 papers)
  8. Philip S. Yu (592 papers)
  9. Haojing Huang (10 papers)
Citations (2)