
An Augmented Transformer Architecture for Natural Language Generation Tasks (1910.13634v1)

Published 30 Oct 2019 in cs.CL and cs.LG

Abstract: Transformer-based neural networks have shown significant advantages on most evaluations of natural language processing and other sequence-to-sequence tasks, owing to the inherent strengths of the architecture. Although the main architecture of the Transformer has been explored extensively, little attention has been paid to the positional encoding module. In this paper, we enhance the sinusoidal positional encoding algorithm by maximizing the variance between encoded consecutive positions to obtain a further improvement. Furthermore, we propose an augmented Transformer architecture that encodes additional linguistic knowledge, such as Part-of-Speech (POS) tags, to boost performance on natural language generation tasks such as automatic translation and summarization. Experiments show that the proposed architecture consistently outperforms the vanilla Transformer.
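The abstract names two mechanisms: sinusoidal positional encoding, which the paper modifies to maximize the variance between consecutive encoded positions, and the injection of POS-tag knowledge at the model's input. A minimal PyTorch sketch of both pieces follows. The class names, the additive combination of token and POS embeddings, and all hyperparameters are illustrative assumptions, since the abstract does not give the paper's exact formulation; the variance-maximizing modification itself is not specified there, so only the standard sinusoidal baseline is shown.

```python
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding (Vaswani et al., 2017).

    The paper modifies this scheme to maximize the variance between
    consecutive positions; that modification is not detailed in the
    abstract, so this is the vanilla baseline. Assumes even d_model.
    """

    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
        )
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]

class POSAugmentedEmbedding(nn.Module):
    """Hypothetical input layer injecting POS-tag knowledge.

    One plausible reading of the abstract's augmentation: sum a learned
    POS-tag embedding with the token embedding before adding positional
    encodings. The actual paper may combine them differently.
    """

    def __init__(self, vocab_size: int, num_pos_tags: int, d_model: int):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos_tag = nn.Embedding(num_pos_tags, d_model)
        self.pe = SinusoidalPositionalEncoding(d_model)

    def forward(self, token_ids: torch.Tensor, pos_tag_ids: torch.Tensor) -> torch.Tensor:
        # token_ids, pos_tag_ids: (batch, seq_len), aligned per token
        return self.pe(self.tok(token_ids) + self.pos_tag(pos_tag_ids))

# Example usage (illustrative sizes): the output feeds a standard
# Transformer encoder in place of the usual embedding + PE layer.
emb = POSAugmentedEmbedding(vocab_size=32000, num_pos_tags=50, d_model=512)
token_ids = torch.randint(0, 32000, (2, 16))
pos_tag_ids = torch.randint(0, 50, (2, 16))
out = emb(token_ids, pos_tag_ids)  # (2, 16, 512)
```

Summing the POS embedding into the token embedding keeps the interface of a drop-in embedding layer, so the rest of a vanilla Transformer stack needs no changes; concatenation followed by a projection would be an equally plausible alternative.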

Authors (6)
  1. Hailiang Li (15 papers)
  2. Adele Y. C. Wang (1 paper)
  3. Yang Liu (2253 papers)
  4. Du Tang (2 papers)
  5. Zhibin Lei (5 papers)
  6. Wenye Li (18 papers)
Citations (12)
