
Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields (2010.03224v1)

Published 7 Oct 2020 in cs.CL

Abstract: Pronouns are often dropped in Chinese conversations, and recovering the dropped pronouns is important for NLP applications such as Machine Translation. Existing approaches usually formulate this as a sequence labeling task of predicting whether a dropped pronoun precedes each token and, if so, its type. Each utterance is treated as a sequence and labeled independently. Although these approaches have shown promise, labeling each utterance independently ignores the dependencies between pronouns in neighboring utterances. Modeling these dependencies is critical to improving the performance of dropped pronoun recovery. In this paper, we present a novel framework that combines the strengths of the Transformer network with General Conditional Random Fields (GCRF) to model the dependencies between pronouns in neighboring utterances. Results on three Chinese conversation datasets show that the Transformer-GCRF model outperforms state-of-the-art dropped pronoun recovery models. Exploratory analysis also demonstrates that the GCRF did help capture the dependencies between pronouns in neighboring utterances, thus contributing to the performance improvements.
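
To make the sequence-labeling formulation concrete, here is a minimal sketch of the general idea: a Transformer encoder produces per-token emission scores over pronoun labels, and a CRF layer decodes a coherent label sequence. This is not the paper's code; the vocabulary size, label inventory, and model dimensions are placeholder assumptions, and a plain linear-chain CRF over a single utterance is used here in place of the paper's General CRF, which additionally links labels across neighboring utterances.

```python
# Illustrative sketch only: Transformer emissions + linear-chain CRF
# Viterbi decoding for "dropped pronoun before this token" labels.
# All sizes and the 5-way label set are assumptions, not from the paper.
import torch
import torch.nn as nn

NUM_LABELS = 5  # hypothetical: e.g. "none" plus four pronoun types

class TransformerTagger(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.emit = nn.Linear(d_model, NUM_LABELS)  # per-token emission scores
        # CRF transition scores: trans[i, j] = score of moving label i -> j
        self.trans = nn.Parameter(torch.zeros(NUM_LABELS, NUM_LABELS))

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))
        return self.emit(h)                          # (batch, seq_len, NUM_LABELS)

    def viterbi(self, emissions):                    # emissions: (seq_len, NUM_LABELS)
        score = emissions[0]
        backpointers = []
        for t in range(1, emissions.size(0)):
            # total[i, j] = best score ending in i, then transitioning to j
            total = score.unsqueeze(1) + self.trans + emissions[t].unsqueeze(0)
            score, idx = total.max(dim=0)
            backpointers.append(idx)
        best = [int(score.argmax())]
        for idx in reversed(backpointers):           # trace back the best path
            best.append(int(idx[best[-1]]))
        return list(reversed(best))

model = TransformerTagger()
utterance = torch.randint(0, 10_000, (1, 12))        # fake token ids
labels = model.viterbi(model(utterance)[0])
print(labels)                                        # one label per token
```

The GCRF in the paper extends this picture by also scoring dependencies between pronoun labels at corresponding positions in neighboring utterances, rather than only between adjacent tokens within one utterance.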

Authors (8)
  1. Jingxuan Yang (20 papers)
  2. Kerui Xu (3 papers)
  3. Jun Xu (398 papers)
  4. Si Li (89 papers)
  5. Sheng Gao (27 papers)
  6. Jun Guo (130 papers)
  7. Ji-Rong Wen (299 papers)
  8. Nianwen Xue (10 papers)
Citations (4)
