Character Composition Model with Convolutional Neural Networks for Dependency Parsing on Morphologically Rich Languages (1705.10814v1)
Published 30 May 2017 in cs.CL
Abstract: We present a transition-based dependency parser that uses a convolutional neural network to compose word representations from characters. The character composition model shows substantial improvement over the word-lookup model, especially for parsing agglutinative languages. These improvements even exceed those obtained from pre-trained word embeddings trained on extra data. On the SPMRL data sets, our system outperforms the previous best greedy parser (Ballesteros et al., 2015) by a margin of 3% on average.
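To make the character composition idea concrete, the sketch below shows one common way such a layer is built: character embeddings for a word are convolved and max-pooled into a single vector that replaces a word-lookup embedding. This is a minimal illustration, not the authors' implementation; all dimensions (char_emb_dim, num_filters, kernel_size) and the class name CharCNNWordComposer are illustrative assumptions, and PyTorch is used here only for convenience.

```python
# Minimal sketch (assumed, not the paper's code) of composing a word
# representation from its characters with a 1D convolution + max pooling.
import torch
import torch.nn as nn


class CharCNNWordComposer(nn.Module):
    def __init__(self, num_chars, char_emb_dim=32, num_filters=64, kernel_size=3):
        super().__init__()
        # Character embedding table; index 0 is reserved for padding.
        self.char_emb = nn.Embedding(num_chars, char_emb_dim, padding_idx=0)
        # 1D convolution over the character sequence of a word.
        self.conv = nn.Conv1d(char_emb_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, char_ids):
        # char_ids: (batch, max_word_len) integer character indices.
        x = self.char_emb(char_ids)       # (batch, max_word_len, char_emb_dim)
        x = x.transpose(1, 2)             # (batch, char_emb_dim, max_word_len)
        x = torch.relu(self.conv(x))      # (batch, num_filters, max_word_len)
        word_vec, _ = x.max(dim=2)        # max-pool over characters
        return word_vec                   # used in place of a word-lookup vector


if __name__ == "__main__":
    # Toy usage: compose a vector for the word "parsing".
    chars = "abcdefghijklmnopqrstuvwxyz"
    char2id = {c: i + 1 for i, c in enumerate(chars)}  # 0 = padding
    ids = torch.tensor([[char2id[c] for c in "parsing"]])
    composer = CharCNNWordComposer(num_chars=len(chars) + 1)
    print(composer(ids).shape)            # torch.Size([1, 64])
```

Because the word vector is built from sub-word units, unseen or rare inflected forms (common in agglutinative languages) still receive informative representations, which is the intuition behind the reported gains over a plain word-lookup model.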