
Chinese Lexical Simplification (2010.07048v1)

Published 14 Oct 2020 in cs.CL

Abstract: Lexical simplification, the process of replacing complex words in a sentence with simpler alternatives of equivalent meaning, has attracted much attention in many languages. Although the richness of Chinese vocabulary makes text very difficult to read for children and non-native speakers, there has been no prior research on the Chinese lexical simplification (CLS) task. To circumvent difficulties in acquiring annotations, we manually create the first benchmark dataset for CLS, which can be used to evaluate lexical simplification systems automatically. To enable a thorough comparison, we present five types of baseline methods for generating substitute candidates for a complex word: a synonym-based approach, a word embedding-based approach, a pretrained language model-based approach, a sememe-based approach, and a hybrid approach. Finally, we design an experimental evaluation of these baselines and discuss their advantages and disadvantages. To the best of our knowledge, this is the first study of the CLS task.
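The abstract names a word embedding-based baseline for generating substitute candidates. As a minimal sketch of that idea, the snippet below ranks candidate words by cosine similarity to the complex word's vector; the toy vectors and the example words (`迅速`/"rapid" and its simpler near-synonyms) are illustrative assumptions, not the paper's actual embeddings or data.

```python
# Sketch of an embedding-based substitute-candidate generator:
# rank vocabulary words by cosine similarity to the complex word.
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def embedding_candidates(complex_word, vectors, top_k=3):
    """Return the top_k words most similar to complex_word."""
    target = vectors[complex_word]
    scored = [
        (word, cosine(target, vec))
        for word, vec in vectors.items()
        if word != complex_word
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [word for word, _ in scored[:top_k]]


# Toy 3-dimensional vectors (hypothetical): "迅速" (rapid, complex)
# alongside simpler candidates "快", "很快", and the antonym "慢".
toy_vectors = {
    "迅速": [0.9, 0.1, 0.3],
    "快":   [0.85, 0.15, 0.35],
    "慢":   [-0.8, 0.2, 0.1],
    "很快": [0.88, 0.12, 0.3],
}

print(embedding_candidates("迅速", toy_vectors, top_k=2))
```

In practice the baseline would draw on full pretrained Chinese word embeddings over a large vocabulary, and later filtering/ranking steps would screen candidates for fit in the sentence context.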

Authors (6)
  1. Jipeng Qiang
  2. Xinyu Lu
  3. Yun Li
  4. Yunhao Yuan
  5. Yang Shi
  6. Xindong Wu
Citations (20)
