Empirical Capacity Model for Self-Attention Neural Networks (2407.15425v2)

Published 22 Jul 2024 in cs.LG, cs.AI, cs.CL, and stat.ML

Abstract: Large pretrained self-attention neural networks, or transformers, have recently been very successful across a wide range of tasks. A model's performance on a given task depends on its ability to memorize and generalize the training data. Large transformer models, which may have billions of parameters, in theory have a huge capacity to memorize content. However, current optimization algorithms fall short of this theoretical capacity, and the achievable capacity also depends strongly on the content. In this paper, we focus on the memory capacity these models attain when trained with common algorithms on synthetic training data. Based on the results, we derive an empirical capacity model (ECM) for a generic transformer. The ECM can be used to design task-specific transformer models with an optimal number of parameters in cases where the target memorization capability of the task can be defined.
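The abstract describes measuring memorization capacity empirically, using common training algorithms on synthetic data, and fitting an empirical model to the results. As a rough illustration of that kind of measurement (not the authors' experimental setup and not the ECM itself), the sketch below trains a tiny PyTorch transformer encoder to memorize random key-to-label associations and counts how many it recalls exactly; the model sizes, data format, and hyperparameters are all assumptions made for the example.

```python
# Illustrative sketch only: estimate how many random key->label associations
# a small transformer encoder memorizes when trained with a standard optimizer
# on synthetic data. All sizes and hyperparameters are assumed for illustration.
import torch
import torch.nn as nn


class TinyTransformer(nn.Module):
    def __init__(self, vocab, d_model, n_heads, n_layers):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, x):
        h = self.encoder(self.embed(x))   # (batch, seq, d_model)
        return self.head(h.mean(dim=1))   # pool over the sequence, predict a label


def measure_memorized_pairs(num_pairs=512, seq_len=8, vocab=64,
                            d_model=64, n_heads=4, n_layers=2,
                            epochs=300, lr=1e-3, seed=0):
    torch.manual_seed(seed)
    # Synthetic content: random token sequences (keys) mapped to random labels.
    keys = torch.randint(0, vocab, (num_pairs, seq_len))
    labels = torch.randint(0, vocab, (num_pairs,))

    model = TinyTransformer(vocab, d_model, n_heads, n_layers)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    # Full-batch training with a common optimizer; the point is only to see
    # how many associations the fixed-size model can store.
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(keys), labels)
        loss.backward()
        opt.step()

    with torch.no_grad():
        memorized = (model(keys).argmax(dim=-1) == labels).sum().item()
    return memorized  # number of pairs recalled exactly after training


if __name__ == "__main__":
    # Sweeping the number of pairs (or the model width) and recording the
    # memorized count yields the kind of measurements an empirical capacity
    # model could be fitted to.
    for n in (128, 256, 512, 1024):
        print(n, measure_memorized_pairs(num_pairs=n))
```

Repeating such sweeps over model width and depth would give capacity-versus-parameter curves; the paper's ECM is a fit to measurements of this general kind, though its exact form is not given in the abstract.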

Authors (3)
  1. Aki Härmä (12 papers)
  2. Marcin Pietrasik (7 papers)
  3. Anna Wilbik (8 papers)