Constant Memory Attention Block (2306.12599v1)

Published 21 Jun 2023 in cs.LG

Abstract: Modern foundation model architectures rely on attention mechanisms to effectively capture context. However, these methods require memory that grows linearly or quadratically with the number of inputs/datapoints, limiting their applicability in low-compute domains. In this work, we propose the Constant Memory Attention Block (CMAB), a novel general-purpose attention block that computes its output in constant memory and performs updates in constant computation. Highlighting CMABs' efficacy, we introduce methods for Neural Processes and Temporal Point Processes. Empirically, we show our proposed methods achieve results competitive with state-of-the-art while being significantly more memory efficient.
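
The abstract's key claim is that attention over N datapoints can be computed with memory that does not grow with N, and that incorporating new datapoints costs constant work. Below is a minimal sketch of one way such a block can be realised, assuming a Perceiver-style design in which a small fixed set of latent queries cross-attends to the input stream and the softmax is accumulated online; the function name, shapes, and dot-product scoring are illustrative assumptions rather than the paper's exact CMAB construction.

```python
import numpy as np

# Sketch: a fixed set of L latent queries attends over a stream of datapoints
# without ever materialising the full N x d key/value matrices.  Only running
# softmax accumulators of shape (L,) / (L, d_v) are kept, so working memory is
# O(L * d) regardless of how many datapoints are streamed in.

def constant_memory_cross_attention(queries, kv_stream, scale):
    """queries: (L, d_k) fixed latent query vectors.
    kv_stream: iterable of (key, value) pairs with shapes (d_k,) and (d_v,).
    Returns the cross-attention output of shape (L, d_v)."""
    L, _ = queries.shape
    running_max = np.full(L, -np.inf)   # per-query max logit (numerical stability)
    running_den = np.zeros(L)           # softmax denominator
    running_num = None                  # softmax-weighted value sum, (L, d_v)

    for key, value in kv_stream:
        logits = queries @ key * scale                 # (L,)
        new_max = np.maximum(running_max, logits)
        rescale = np.exp(running_max - new_max)        # rescale old accumulators
        weight = np.exp(logits - new_max)              # weight of the new datapoint
        if running_num is None:
            running_num = np.zeros((L, value.shape[0]))
        running_num = running_num * rescale[:, None] + weight[:, None] * value[None, :]
        running_den = running_den * rescale + weight
        running_max = new_max

    return running_num / running_den[:, None]          # (L, d_v)

# Usage: 4 latent queries attending over 1000 streamed datapoints.
rng = np.random.default_rng(0)
d_k, d_v = 8, 16
queries = rng.normal(size=(4, d_k))
stream = ((rng.normal(size=d_k), rng.normal(size=d_v)) for _ in range(1000))
out = constant_memory_cross_attention(queries, stream, scale=1.0 / np.sqrt(d_k))
print(out.shape)  # (4, 16)
```

Because the accumulators fully summarise the datapoints seen so far, appending a new datapoint only updates them in O(L) time, which is the flavour of constant-computation update the abstract describes; how CMAB achieves this in detail is specified in the paper itself.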

