An Empirical Study for Vietnamese Constituency Parsing with Pre-training (2010.09623v2)

Published 19 Oct 2020 in cs.CL

Abstract: In this work, we use a span-based approach for Vietnamese constituency parsing. Our method follows a self-attention encoder architecture with a chart decoder that uses a CKY-style inference algorithm. We compare our method using the pre-trained models XLM-Roberta and PhoBERT on two Vietnamese datasets, VietTreebank and NIIVTB1, and analyze the experimental results. The results show that our model with XLM-Roberta achieved a significantly better F1-score than the other pre-trained models: 81.19% on VietTreebank and 85.70% on NIIVTB1.
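
The chart decoder described in the abstract follows the standard span-based CKY recurrence: the score of the best tree over a span is the best label score for that span plus the best way to split it into two sub-spans. The sketch below is not the authors' code; it is a minimal, simplified variant (it assigns a label to every span, ignoring the implicit empty label used for binarization in some span-based parsers) and assumes a hypothetical `span_scores` array that, in the paper's setting, would be produced by the self-attention encoder over XLM-Roberta or PhoBERT representations.

```python
import numpy as np

def cky_decode(span_scores):
    """CKY-style chart decoding for span-based parsing (illustrative sketch).

    span_scores: (n+1, n+1, L) array; span_scores[i, j, l] is the model's
    score for labeling the span of words i..j (fencepost indices) with
    label l. This input is hypothetical, standing in for the encoder's
    per-span scores. Returns the best tree score and its labeled spans.
    """
    n = span_scores.shape[0] - 1
    chart = np.zeros((n + 1, n + 1))              # best score per span
    label = np.zeros((n + 1, n + 1), dtype=int)   # best label per span
    split = np.zeros((n + 1, n + 1), dtype=int)   # best split point per span

    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            label[i, j] = span_scores[i, j].argmax()
            best = span_scores[i, j, label[i, j]]
            if length > 1:
                # choose the split k that maximizes the sum of child scores
                cand = [chart[i, k] + chart[k, j] for k in range(i + 1, j)]
                k_best = int(np.argmax(cand))
                split[i, j] = i + 1 + k_best
                best += cand[k_best]
            chart[i, j] = best

    def backtrace(i, j):
        # recover the labeled spans of the best tree from the split table
        node = (i, j, int(label[i, j]))
        if j - i == 1:
            return [node]
        k = split[i, j]
        return [node] + backtrace(i, k) + backtrace(k, j)

    return chart[0, n], backtrace(0, n)

# Example with random scores for a 5-word sentence and 10 labels:
# score, spans = cky_decode(np.random.randn(6, 6, 10))
```

Because each cell only depends on strictly shorter spans, filling the chart by increasing span length yields an exact O(n^3 L) decode; this is the property the abstract's CKY-style inference relies on.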

Authors (5)
  1. Tuan-Vi Tran (1 paper)
  2. Xuan-Thien Pham (1 paper)
  3. Duc-Vu Nguyen (18 papers)
  4. Kiet Van Nguyen (74 papers)
  5. Ngan Luu-Thuy Nguyen (56 papers)
Citations (3)
