CLOWER: A Pre-trained Language Model with Contrastive Learning over Word and Character Representations (2208.10844v2)

Published 23 Aug 2022 in cs.CL and cs.AI

Abstract: Pre-trained language models (PLMs) have achieved remarkable performance gains across numerous downstream tasks in natural language understanding. Various Chinese PLMs have been successively proposed for learning better Chinese language representations. However, most current models use Chinese characters as inputs and are not able to encode the semantic information contained in Chinese words. While recent pre-trained models incorporate both words and characters simultaneously, they usually suffer from deficient semantic interactions and fail to capture the semantic relation between words and characters. To address the above issues, we propose a simple yet effective PLM, CLOWER, which adopts Contrastive Learning Over Word and charactER representations. In particular, CLOWER implicitly encodes the coarse-grained information (i.e., words) into the fine-grained representations (i.e., characters) through contrastive learning on multi-grained information. CLOWER is of great value in realistic scenarios since it can be easily incorporated into any existing fine-grained PLM without modifying the production pipeline. Extensive experiments conducted on a range of downstream tasks demonstrate the superior performance of CLOWER over several state-of-the-art baselines.
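The abstract does not spell out the training objective, but the core idea of contrasting coarse-grained (word) and fine-grained (character) views can be illustrated with an InfoNCE-style loss. The sketch below is an assumption, not the paper's exact formulation: the helper names (`info_nce`, `pool_char_spans`), the mean-pooling choice, and the symmetric loss are all hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce(word_reps, char_span_reps, temperature=0.07):
    """InfoNCE-style contrastive loss: each word embedding is pulled toward
    the pooled representation of its own characters and pushed away from
    the character spans of other words in the batch.

    word_reps:      (N, d) word-level embeddings (coarse-grained view)
    char_span_reps: (N, d) pooled character embeddings for the same words
                    (fine-grained view)
    """
    w = F.normalize(word_reps, dim=-1)
    c = F.normalize(char_span_reps, dim=-1)
    logits = w @ c.t() / temperature               # (N, N) similarity matrix
    targets = torch.arange(w.size(0), device=w.device)
    # Symmetric loss over both directions: word -> chars and chars -> word.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def pool_char_spans(char_hidden, word_spans):
    """Mean-pool character hidden states over each word's character span.

    char_hidden: (L, d) character-level encoder outputs for one sentence
    word_spans:  list of (start, end) character indices, one per word
                 (e.g., from a Chinese word segmenter)
    """
    return torch.stack([char_hidden[s:e].mean(dim=0) for s, e in word_spans])
```

Under this reading, the character hidden states come from an ordinary character-level Chinese PLM and the word spans from a segmenter used only during pre-training; since inference needs just the character encoder, existing production pipelines stay unchanged, as the abstract emphasizes.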

Authors (9)
  1. Borun Chen (3 papers)
  2. Hongyin Tang (9 papers)
  3. Jiahao Bu (6 papers)
  4. Kai Zhang (542 papers)
  5. Jingang Wang (71 papers)
  6. Qifan Wang (129 papers)
  7. Hai-Tao Zheng (94 papers)
  8. Wei Wu (481 papers)
  9. Liqian Yu (1 paper)
Citations (1)