EXCGEC: A Benchmark of Edit-wise Explainable Chinese Grammatical Error Correction (2407.00924v1)

Published 1 Jul 2024 in cs.CL

Abstract: Existing studies explore the explainability of Grammatical Error Correction (GEC) only in limited scenarios, ignoring the interaction between corrections and explanations. To bridge this gap, this paper introduces the task of EXplainable GEC (EXGEC), which focuses on the integral role of both the correction and explanation tasks. To facilitate the task, we propose EXCGEC, a tailored benchmark for Chinese EXGEC consisting of 8,216 explanation-augmented samples featuring a design of hybrid edit-wise explanations. We benchmark several series of LLMs in multiple settings, covering both post-explaining and pre-explaining. To promote the development of the task, we introduce a comprehensive suite of automatic metrics and conduct human evaluation experiments demonstrating that the automatic metrics for free-text explanations are consistent with human judgments. All code and data will be released after the review.
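For illustration, the sketch below shows one plausible shape for an explanation-augmented, edit-wise sample and for the two benchmarked task orderings (post-explaining vs. pre-explaining). The field names, class names, and prompt wording are assumptions for this sketch; the actual EXCGEC schema and prompts are defined in the paper, not on this page.

```python
# Hypothetical sketch only: field names and prompt wording are assumptions,
# not the authors' actual EXCGEC schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EditExplanation:
    """One edit plus its hybrid explanation (error category + free-text rationale)."""
    source_span: str       # erroneous span in the source sentence
    corrected_span: str    # replacement text in the corrected sentence
    error_type: str        # grammatical error category label
    rationale: str         # free-text explanation of why the edit is needed

@dataclass
class ExcgecSample:
    source: str            # ungrammatical Chinese sentence
    corrected: str         # corrected sentence
    explanations: List[EditExplanation] = field(default_factory=list)

def build_prompt(sample: ExcgecSample, setting: str) -> str:
    """Order the sub-tasks for the two benchmarked settings:
    'post-explaining' corrects first, then explains each edit;
    'pre-explaining' explains the errors first, then corrects."""
    if setting == "post-explaining":
        return f"Correct the sentence, then explain each edit:\n{sample.source}"
    if setting == "pre-explaining":
        return f"Explain the errors, then output the corrected sentence:\n{sample.source}"
    raise ValueError(f"unknown setting: {setting}")
```

In this reading, the post-explaining setting conditions explanations on an already produced correction, while the pre-explaining setting generates explanations first and lets the correction depend on them, which is the correction-explanation interaction the abstract highlights.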

Authors (10)
  1. Jingheng Ye (15 papers)
  2. Shang Qin (3 papers)
  3. Yinghui Li (65 papers)
  4. Xuxin Cheng (42 papers)
  5. Libo Qin (77 papers)
  6. Hai-Tao Zheng (94 papers)
  7. Peng Xing (17 papers)
  8. Zishan Xu (8 papers)
  9. Guo Cheng (2 papers)
  10. Zhao Wei (13 papers)