Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network (1510.06168v1)

Published 21 Oct 2015 in cs.CL

Abstract: The Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN) has been shown to be very effective for tagging sequential data, e.g. speech utterances or handwritten documents, while word embedding has been demonstrated to be a powerful representation for characterizing the statistical properties of natural language. In this study, we propose to use a BLSTM-RNN with word embeddings for the part-of-speech (POS) tagging task. When tested on the Penn Treebank WSJ test set, a state-of-the-art tagging accuracy of 97.40% is achieved. Without using morphological features, this approach also achieves performance comparable with the Stanford POS tagger.
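
The abstract describes tagging each token with a bidirectional LSTM over word embeddings. Below is a minimal illustrative sketch in PyTorch of that general architecture; it is not the authors' implementation, and the embedding size, hidden size, and vocabulary size are assumptions chosen only for the example.

```python
import torch
import torch.nn as nn

class BLSTMTagger(nn.Module):
    """Sketch of a BLSTM POS tagger: word embeddings -> bidirectional LSTM
    -> per-token tag scores. Hyperparameters are illustrative assumptions,
    not the settings reported in the paper."""

    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=100):
        super().__init__()
        # Word embeddings represent each token; these could be initialized
        # from pre-trained embeddings rather than learned from scratch.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # The BLSTM reads the sentence both left-to-right and right-to-left.
        self.blstm = nn.LSTM(embed_dim, hidden_dim,
                             batch_first=True, bidirectional=True)
        # Project the concatenated forward/backward states to tag scores.
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        hidden, _ = self.blstm(embedded)       # (batch, seq_len, 2*hidden_dim)
        return self.out(hidden)                # (batch, seq_len, num_tags)

# Usage sketch on a toy batch of already-indexed sentences.
model = BLSTMTagger(vocab_size=10000, num_tags=45)  # 45 = Penn Treebank tag set
tokens = torch.randint(0, 10000, (2, 7))            # 2 sentences, 7 tokens each
tag_scores = model(tokens)                           # (2, 7, 45)
predicted_tags = tag_scores.argmax(dim=-1)           # one tag per token
```

In this framing the tagger makes an independent decision per token from the BLSTM's contextual state; the paper's reported 97.40% accuracy additionally relies on word embeddings and features beyond this bare sketch.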

Authors (5)
  1. Peilu Wang (3 papers)
  2. Yao Qian (37 papers)
  3. Frank K. Soong (17 papers)
  4. Lei He (121 papers)
  5. Hai Zhao (227 papers)
Citations (107)
