Parsing linearizations appreciate PoS tags - but some are fussy about errors (2210.15219v1)

Published 27 Oct 2022 in cs.CL

Abstract: PoS tags, once taken for granted as a useful resource for syntactic parsing, have become more situational with the popularization of deep learning. Recent work on the impact of PoS tags on graph- and transition-based parsers suggests that they are only useful when tagging accuracy is prohibitively high, or in low-resource scenarios. However, such an analysis is lacking for the emerging sequence labeling parsing paradigm, where it is especially relevant as some models explicitly use PoS tags for encoding and decoding. We undertake a study and uncover some trends. Among them, PoS tags are generally more useful for sequence labeling parsers than for other paradigms, but the impact of their accuracy is highly encoding-dependent, with the PoS-based head-selection encoding being best only when both tagging accuracy and resource availability are high.
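The abstract mentions the PoS-based head-selection encoding used by some sequence labeling parsers, in which each word's label identifies its head as "the k-th word with PoS tag p to the left/right". The sketch below is a rough illustration of that idea, not the authors' code: the function name, toy sentence, and label format are assumptions made for the example.

```python
# Minimal sketch of a PoS-based head-selection (relative PoS) encoding for
# dependency parsing as sequence labeling. Each word gets one label:
# (offset, head_pos_tag, relation), where the head is the |offset|-th word
# with that PoS tag to the right (offset > 0) or left (offset < 0).

def encode_relative_pos(pos_tags, heads, rels):
    """Return one label per word.

    heads[i] is the 1-based index of the head of word i+1 (0 = root).
    """
    labels = []
    for i, (head, rel) in enumerate(zip(heads, rels), start=1):
        if head == 0:
            labels.append((0, "ROOT", rel))  # artificial root
            continue
        head_tag = pos_tags[head - 1]
        if head > i:
            # Head is to the right: count words with the head's tag between i+1 and head.
            offset = sum(1 for j in range(i + 1, head + 1) if pos_tags[j - 1] == head_tag)
        else:
            # Head is to the left: same count, with a negative sign.
            offset = -sum(1 for j in range(head, i) if pos_tags[j - 1] == head_tag)
        labels.append((offset, head_tag, rel))
    return labels


if __name__ == "__main__":
    # Toy sentence for illustration only.
    words = ["She", "reads", "old", "books"]
    tags  = ["PRON", "VERB", "ADJ", "NOUN"]
    heads = [2, 0, 4, 2]                      # 1-based head indices, 0 = root
    rels  = ["nsubj", "root", "amod", "obj"]
    for w, lab in zip(words, encode_relative_pos(tags, heads, rels)):
        print(w, lab)
```

Because the labels refer to PoS tags directly, decoding the tree at test time depends on predicted tags, which is why the paper finds this encoding more sensitive to tagging accuracy than alternatives.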

Authors (4)
  1. Alberto Muñoz-Ortiz (8 papers)
  2. Mark Anderson (24 papers)
  3. David Vilares (39 papers)
  4. Carlos Gómez-Rodríguez (87 papers)
Citations (2)
