Relaxed Attention for Transformer Models (2209.09735v1)

Published 20 Sep 2022 in cs.LG, cs.CL, eess.AS, and eess.IV

Abstract: The powerful modeling capabilities of all-attention-based transformer architectures often cause overfitting and, for natural language processing tasks, lead to an implicitly learned internal language model in the autoregressive transformer decoder, complicating the integration of external language models. In this paper, we explore relaxed attention, a simple and easy-to-implement smoothing of the attention weights, yielding a two-fold improvement to the general transformer architecture: First, relaxed attention provides regularization when applied to the self-attention layers in the encoder. Second, we show that it naturally supports the integration of an external language model, as it suppresses the implicitly learned internal language model by relaxing the cross attention in the decoder. We demonstrate the benefit of relaxed attention across several tasks, with clear improvements in combination with recent benchmark approaches. Specifically, we exceed the former state-of-the-art word error rate of 26.90% on the largest public lip-reading benchmark, LRS3, with a word error rate of 26.31%, and we achieve a top-performing BLEU score of 37.67 on the IWSLT14 (DE$\rightarrow$EN) machine translation task without external language models and with virtually no additional model parameters. Code and models will be made publicly available.
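
The core operation described in the abstract, smoothing the attention weights, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration rather than the authors' released implementation: it blends the softmax attention map with a uniform distribution over the key positions using a relaxation coefficient, here named `gamma`; the function name `relaxed_attention` and the chosen value of `gamma` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def relaxed_attention(q, k, v, gamma=0.1):
    """Scaled dot-product attention with relaxed (smoothed) attention weights.

    The softmax attention map is blended with a uniform distribution over the
    key positions, controlled by the relaxation coefficient `gamma`
    (gamma = 0 recovers standard attention).
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5            # (..., T_q, T_k)
    attn = F.softmax(scores, dim=-1)                          # standard attention weights
    uniform = torch.full_like(attn, 1.0 / attn.size(-1))      # uniform distribution over keys
    attn_relaxed = (1.0 - gamma) * attn + gamma * uniform     # smoothed attention weights
    return attn_relaxed @ v

# Example: self-attention over a batch of 2 sequences of length 5, dimension 8
q = k = v = torch.randn(2, 5, 8)
out = relaxed_attention(q, k, v, gamma=0.1)
print(out.shape)  # torch.Size([2, 5, 8])
```

Per the abstract, this smoothing serves as regularization when applied to the encoder self-attention and suppresses the internal language model when applied to the decoder cross-attention; where and how strongly to relax is a modeling choice discussed in the paper.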

Authors (4)
  1. Timo Lohrenz (3 papers)
  2. Björn Möller (2 papers)
  3. Zhengyang Li (35 papers)
  4. Tim Fingscheidt (56 papers)
Citations (10)