
Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via Posterior Regularization (1905.01963v1)

Published 26 Apr 2019 in cs.CL, cs.LG, and stat.ML

Abstract: Existing methods for Chinese word segmentation (CWS) usually rely on a large number of labeled sentences to train word segmentation models, and these sentences are expensive and time-consuming to annotate. Fortunately, unlabeled data is usually easy to collect, and many high-quality Chinese lexicons are available off-the-shelf; both can provide useful information for CWS. In this paper, we propose a neural approach for Chinese word segmentation that can exploit both a lexicon and unlabeled data. Our approach is based on a variant of the posterior regularization algorithm: the unlabeled data and the lexicon are incorporated into model training as indirect supervision by regularizing the prediction space of CWS models. Extensive experiments on multiple benchmark datasets in both in-domain and cross-domain scenarios validate the effectiveness of our approach.
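The abstract describes incorporating a lexicon as indirect supervision by regularizing the model's prediction space. A minimal sketch of that idea, under assumptions not taken from the paper (the tag set {B, M, E, S}, greedy longest-match lexicon lookup, and an interpolation-based projection standing in for the exact posterior-regularization E-step):

```python
# Hypothetical sketch: turn lexicon matches on an unlabeled sentence into
# per-character tag constraints, then project a CWS model's posteriors
# toward those constraints. The paper's exact formulation may differ.

TAGS = ["B", "M", "E", "S"]  # Begin / Middle / End-of-word / Single-char word

def lexicon_constraint(sentence, lexicon):
    """Greedy longest-match lexicon lookup; characters covered by a match
    get a fixed tag, characters outside any match stay unconstrained (None)."""
    n = len(sentence)
    constraint = [None] * n
    i = 0
    while i < n:
        match_end = None
        for j in range(n, i, -1):  # try longest substrings first
            if sentence[i:j] in lexicon:
                match_end = j
                break
        if match_end is None:
            i += 1
        elif match_end == i + 1:
            constraint[i] = "S"
            i += 1
        else:
            constraint[i] = "B"
            for k in range(i + 1, match_end - 1):
                constraint[k] = "M"
            constraint[match_end - 1] = "E"
            i = match_end
    return constraint

def pr_project(posteriors, constraint, strength=0.9):
    """Approximate posterior-regularization projection: interpolate each
    model posterior with the one-hot lexicon constraint, then renormalize."""
    out = []
    for p, c in zip(posteriors, constraint):
        if c is None:
            out.append(dict(p))  # unconstrained: keep the model posterior
            continue
        q = {t: (1 - strength) * p[t] + (strength if t == c else 0.0)
             for t in TAGS}
        z = sum(q.values())
        out.append({t: v / z for t, v in q.items()})
    return out
```

For example, with a toy lexicon containing 北京大学 ("Peking University"), the sentence 北京大学生 is constrained to tag the first four characters B M M E, while the last character is left to the model; the projected posteriors would then serve as soft targets when training on unlabeled text.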

Authors (5)
  1. Junxin Liu (3 papers)
  2. Fangzhao Wu (81 papers)
  3. Chuhan Wu (87 papers)
  4. Yongfeng Huang (110 papers)
  5. Xing Xie (220 papers)
Citations (10)
