
NEZHA: Neural Contextualized Representation for Chinese Language Understanding (1909.00204v3)

Published 31 Aug 2019 in cs.CL

Abstract: Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora. In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed representation for CHinese lAnguage understanding) on Chinese corpora and finetuning them for Chinese NLU tasks. The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional Relative Positional Encoding as an effective positional encoding scheme, the Whole Word Masking strategy, Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA achieves state-of-the-art performance when finetuned on several representative Chinese tasks, including named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti) and natural language inference (XNLI).
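The headline architectural change named in the abstract is Functional Relative Positional Encoding: attention uses fixed sinusoidal embeddings of the clipped relative distance between token positions rather than learned absolute position embeddings. Below is a minimal NumPy sketch of how such a sinusoidal relative-position table could be built; the function name, the max_relative_position value of 64, and the base of 10000 are illustrative assumptions for this sketch, not details confirmed by the report.

```python
import numpy as np

def relative_position_encoding(length, depth, max_relative_position=64):
    """Sketch of a functional (sinusoidal) relative positional encoding table.

    Returns an array of shape (length, length, depth) where entry (i, j, :)
    is a fixed sinusoidal embedding of the clipped relative distance j - i.
    Values are computed, not learned.
    """
    # Relative distances j - i, clipped to [-max_relative_position, max_relative_position].
    positions = np.arange(length)
    rel = positions[None, :] - positions[:, None]
    rel = np.clip(rel, -max_relative_position, max_relative_position)

    # Build a sinusoidal table over all possible clipped distances.
    vocab = 2 * max_relative_position + 1
    pos = np.arange(vocab)[:, None]            # (vocab, 1)
    dim = np.arange(depth)[None, :]            # (1, depth)
    angle = pos / np.power(10000.0, (2 * (dim // 2)) / depth)
    table = np.where(dim % 2 == 0, np.sin(angle), np.cos(angle))  # (vocab, depth)

    # Look up the embedding for each (i, j) pair.
    return table[rel + max_relative_position]  # (length, length, depth)
```

In a relative-attention layer, such a table would typically be added when computing attention scores and when aggregating values, so that the model conditions on token distances directly; because the table is functional rather than learned, it extends naturally to sequence lengths not seen during pre-training.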

Authors (10)
  1. Junqiu Wei (4 papers)
  2. Xiaozhe Ren (21 papers)
  3. Xiaoguang Li (71 papers)
  4. Wenyong Huang (12 papers)
  5. Yi Liao (87 papers)
  6. Yasheng Wang (91 papers)
  7. Jiashu Lin (1 paper)
  8. Xin Jiang (242 papers)
  9. Xiao Chen (277 papers)
  10. Qun Liu (230 papers)
Citations (113)