
Are Neighbors Enough? Multi-Head Neural n-gram can be Alternative to Self-attention (2207.13354v1)

Published 27 Jul 2022 in cs.CL

Abstract: The impressive performance of Transformer has been attributed to self-attention, in which dependencies over the entire input sequence are considered at every position. In this work, we reform the neural $n$-gram model, which focuses on only several surrounding representations of each position, with the multi-head mechanism as in Vaswani et al. (2017). Through experiments on sequence-to-sequence tasks, we show that replacing self-attention in Transformer with multi-head neural $n$-gram can achieve performance comparable to or better than that of Transformer. From various analyses of our proposed method, we find that multi-head neural $n$-gram is complementary to self-attention, and that combining them can further improve the performance of the vanilla Transformer.
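The abstract describes replacing self-attention with a multi-head neural $n$-gram that looks only at a small window of neighboring representations at each position. Below is a minimal PyTorch sketch of that idea; the window size, the causal (left-only) padding, and the projection shared across heads are illustrative assumptions and may differ from the paper's exact formulation.

```python
import torch
import torch.nn as nn


class MultiHeadNeuralNgram(nn.Module):
    """Sketch of a multi-head neural n-gram layer.

    Each position sees only itself and its (n - 1) preceding neighbors;
    every head concatenates those local representations and projects them
    back to the head dimension, in place of global self-attention.
    """

    def __init__(self, d_model: int, num_heads: int, n: int = 4):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.n = n
        # Maps the concatenated n-gram window (n * head_dim) back to
        # head_dim; shared across heads here for simplicity (assumption).
        self.window_proj = nn.Linear(n * self.head_dim, self.head_dim)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        h = x.view(b, t, self.num_heads, self.head_dim)
        # Left-pad so position i sees positions [i - n + 1, ..., i].
        pad = h.new_zeros(b, self.n - 1, self.num_heads, self.head_dim)
        padded = torch.cat([pad, h], dim=1)          # (b, t+n-1, H, Dh)
        # Sliding windows of length n for every position.
        windows = padded.unfold(1, self.n, 1)        # (b, t, H, Dh, n)
        windows = windows.permute(0, 1, 2, 4, 3)     # (b, t, H, n, Dh)
        windows = windows.reshape(b, t, self.num_heads, self.n * self.head_dim)
        mixed = self.window_proj(windows)            # (b, t, H, Dh)
        return self.out_proj(mixed.reshape(b, t, -1))


# Usage: drop-in replacement for the self-attention sublayer (sketch only).
layer = MultiHeadNeuralNgram(d_model=512, num_heads=8, n=4)
y = layer(torch.randn(2, 16, 512))                  # (2, 16, 512)
```

In the paper's setting, such a local mixing layer would take the place of the self-attention sublayer inside each Transformer block, with the feed-forward sublayer, residual connections, and layer normalization left unchanged.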

Authors (4)
  1. Mengsay Loem (8 papers)
  2. Sho Takase (25 papers)
  3. Masahiro Kaneko (46 papers)
  4. Naoaki Okazaki (70 papers)
Citations (1)
