
Open Vocabulary Learning for Neural Chinese Pinyin IME (1811.04352v4)

Published 11 Nov 2018 in cs.AI and cs.CL

Abstract: Pinyin-to-character (P2C) conversion is the core component of a pinyin-based Chinese input method engine (IME). However, the conversion is seriously compromised by the ambiguity of Chinese characters corresponding to a given pinyin sequence, as well as by predefined fixed vocabularies. To alleviate these issues, we propose a neural P2C conversion model augmented by an online-updated vocabulary with a sampling mechanism that supports open vocabulary learning while the IME is in use. Our experiments show that the proposed method outperforms commercial IMEs and state-of-the-art traditional models on a standard corpus and a real user input history dataset across multiple metrics, indicating that the online-updated vocabulary indeed helps our IME effectively follow user input behavior.
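The central mechanism described in the abstract, an online-updated vocabulary with a sampling step to keep decoding tractable, can be pictured with a minimal sketch. The class name, the frequency-weighted sampling strategy, and the vocabulary cap below are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of an online-updated vocabulary with a sampling
# mechanism for open-vocabulary P2C conversion. The names and the
# frequency-weighted sampling are assumptions for illustration only.
import random
from collections import Counter

class OnlineVocab:
    def __init__(self, base_words, max_sample=4096):
        # Seed with a fixed base vocabulary; counts track user commits.
        self.counts = Counter({w: 1 for w in base_words})
        self.max_sample = max_sample

    def observe(self, committed_words):
        """Update the vocabulary online from words the user actually committed."""
        self.counts.update(committed_words)

    def sample_target_vocab(self):
        """Sample a bounded target vocabulary for the next decoding step,
        weighting words by how often the user has committed them."""
        words = list(self.counts)
        if len(words) <= self.max_sample:
            return set(words)
        weights = [self.counts[w] for w in words]
        return set(random.choices(words, weights=weights, k=self.max_sample))

# Usage: after each conversion, feed back what the user committed.
vocab = OnlineVocab(["你好", "世界"])
vocab.observe(["深度学习", "输入法"])  # new words enter the vocabulary online
print(sorted(vocab.sample_target_vocab()))
```

The point of the sketch is only that newly committed user words enter the candidate vocabulary immediately, while sampling keeps the per-step target vocabulary bounded so the neural decoder does not have to score an unbounded word set.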

Authors (3)
  1. Zhuosheng Zhang (125 papers)
  2. Yafang Huang (5 papers)
  3. Hai Zhao (227 papers)
Citations (2)
