Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention (2310.07911v1)

Published 11 Oct 2023 in cs.CL

Abstract: Scaling pre-trained LLMs has resulted in large performance gains in various natural language processing tasks but comes with a large cost in memory requirements. Inspired by the position embeddings in transformers, we aim to simplify and reduce the memory footprint of the multi-head attention (MHA) mechanism. We propose an alternative module that uses only a single shared projection matrix and multiple head embeddings (MHE), i.e., one per head. We empirically demonstrate that our MHE attention is substantially more memory efficient compared to alternative attention mechanisms while achieving a high predictive performance retention ratio relative to vanilla MHA on several downstream tasks. MHE attention only requires a negligible fraction of additional parameters ($3nd$, where $n$ is the number of attention heads and $d$ the size of the head embeddings) compared to a single-head attention, while MHA requires $(3n^2-3n)d^2-3nd$ additional parameters.
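
The mechanism lends itself to a compact implementation. Below is a minimal PyTorch sketch based only on the abstract's description: a single shared Q/K/V projection (as in single-head attention) plus one learned embedding per head and per projection, accounting for the $3nd$ extra parameters mentioned above. How the head embeddings are combined with the shared projection (here, added to per-head slices of its output) and the module name `MHEAttention` are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of MHE (multi-head-embedding) attention, reconstructed from
# the abstract: one shared Q/K/V projection plus a learned embedding per head.
# Adding the embeddings to per-head slices of the shared projection output is an
# assumption, not necessarily the paper's exact recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MHEAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Single shared projection for Q, K and V, as in single-head attention.
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.out = nn.Linear(d_model, d_model, bias=False)
        # 3 * n_heads embeddings of size d_head -> only 3*n*d additional parameters.
        self.head_emb = nn.Parameter(torch.zeros(3, n_heads, self.d_head))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        # Shared projection, then split into (Q, K, V) x heads x head dimension.
        qkv = self.qkv(x).view(b, t, 3, self.n_heads, self.d_head)
        # One embedding per head and per projection, broadcast over batch and time.
        qkv = qkv + self.head_emb.view(1, 1, 3, self.n_heads, self.d_head)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (b, heads, t, d_head)
        att = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(out)
```

For example, with $n = 12$ heads and head-embedding size $d = 64$, the head embeddings contribute only $3 \times 12 \times 64 = 2{,}304$ parameters on top of the shared projections, which is what makes the module memory efficient relative to per-head projection matrices.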

Authors (2)
  1. Huiyin Xue (3 papers)
  2. Nikolaos Aletras (72 papers)
