
On the Role of Pre-trained Language Models in Word Ordering: A Case Study with BART (2204.07367v2)

Published 15 Apr 2022 in cs.CL

Abstract: Word ordering is a constrained language generation task taking unordered words as input. Existing work uses linear models and neural networks for the task, yet pre-trained language models have not been studied in word ordering, let alone why they help. We use BART as an instance and show its effectiveness in the task. To explain why BART helps word ordering, we extend analysis with probing and empirically identify that syntactic dependency knowledge in BART is a reliable explanation. We also report performance gains with BART in the related partial tree linearization task, which readily extends our analysis.
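The task framing in the abstract can be made concrete with a short sketch: word ordering is treated as sequence-to-sequence generation, where the shuffled bag of words is the source and the original sentence is the target. The sketch below uses the Hugging Face transformers library; the checkpoint name facebook/bart-base and the space-joined shuffled input are illustrative assumptions rather than the paper's exact setup, and a model actually fine-tuned on shuffled-to-ordered pairs would be needed for meaningful output.

```python
# Minimal sketch: word ordering as seq2seq generation with BART.
# Assumptions (not from the paper): the off-the-shelf "facebook/bart-base"
# checkpoint stands in for a model fine-tuned on shuffled-input -> ordered-
# sentence pairs, and the unordered words are simply space-joined.
import random

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

sentence = "the cat sat on the mat"
words = sentence.split()
random.shuffle(words)            # unordered input words
source = " ".join(words)         # e.g. "mat the on sat cat the"

inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
ordered = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print("shuffled: ", source)
print("generated:", ordered)     # meaningful only after fine-tuning
```

Fine-tuning such a model on pairs of shuffled and original sentences, then probing its representations for syntactic dependency knowledge, corresponds to the analysis pipeline the abstract describes; the shuffling scheme and decoding parameters here are placeholders.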

Authors (3)
  1. Zebin Ou (4 papers)
  2. Meishan Zhang (70 papers)
  3. Yue Zhang (620 papers)
Citations (2)
