Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules? (1808.09551v1)

Published 28 Aug 2018 in cs.CL, cs.AI, and cs.LG

Abstract: Character-level features are currently used in different neural network-based natural language processing algorithms. However, little is known about the character-level patterns those models learn. Moreover, models are often compared only quantitatively, while a qualitative analysis is missing. In this paper, we investigate which character-level patterns neural networks learn and whether those patterns coincide with manually defined word segmentations and annotations. To that end, we extend the contextual decomposition technique (Murdoch et al. 2018) to convolutional neural networks, which allows us to compare convolutional neural networks and bidirectional long short-term memory networks. We evaluate and compare these models for the task of morphological tagging on three morphologically different languages and show that these models implicitly discover understandable linguistic rules. Our implementation can be found at https://github.com/FredericGodin/ContextualDecomposition-NLP
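The key extension the abstract mentions is applying contextual decomposition to convolutional layers. The core idea carries over cleanly because a convolution (before its nonlinearity) is linear: splitting the input characters into a "relevant" span and the "irrelevant" rest splits the pre-activation output exactly. The sketch below illustrates only this exact linear split on a toy 1-D convolution with scalar character embeddings; the paper's full method also propagates the two parts through the nonlinearity and pooling (following the linearization of Murdoch et al. 2018), which is not shown here. All names and shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def conv1d(x, W, b):
    """Toy 1-D convolution: n_filters filters of a fixed width slid over
    a sequence of scalar character embeddings. Returns (n_filters, n_pos)."""
    width = W.shape[1]
    return np.array([[W[f] @ x[i:i + width] + b[f]
                      for i in range(len(x) - width + 1)]
                     for f in range(W.shape[0])])

rng = np.random.default_rng(0)
x = rng.normal(size=8)        # 8 "characters" (scalar embeddings for simplicity)
W = rng.normal(size=(3, 3))   # 3 filters of width 3
b = rng.normal(size=3)

# Contextual decomposition: mark characters 2..4 as the span of interest
# and split the input additively into relevant (beta) and irrelevant (gamma).
mask = np.zeros_like(x)
mask[2:5] = 1.0
beta, gamma = x * mask, x * (1 - mask)

# Linearity of the convolution makes the pre-activation split exact.
# Here the bias is assigned entirely to the irrelevant part (one common choice).
full = conv1d(x, W, b)
rel = conv1d(beta, W, np.zeros_like(b))
irrel = conv1d(gamma, W, b)
assert np.allclose(full, rel + irrel)
```

The `rel` term is then interpreted as the contribution of the chosen character span (e.g. a suffix) to each filter's activation, which is what lets the authors ask whether the learned patterns line up with linguistic segmentations.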

Authors (5)
  1. Fréderic Godin (23 papers)
  2. Kris Demuynck (20 papers)
  3. Joni Dambre (27 papers)
  4. Wesley De Neve (27 papers)
  5. Thomas Demeester (76 papers)
Citations (17)