A Tensorized Transformer for Language Modeling (1906.09777v3)

Published 24 Jun 2019 in cs.CL and cs.LG

Abstract: Recent developments in neural models have connected the encoder and decoder through a self-attention mechanism. In particular, the Transformer, which is based solely on self-attention, has led to breakthroughs in NLP tasks. However, the multi-head attention mechanism, a key component of the Transformer, limits effective deployment of the model in resource-limited settings. In this paper, based on the ideas of tensor decomposition and parameter sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD). We test and verify the proposed attention method on three language modeling tasks (i.e., PTB, WikiText-103 and One-billion) and a neural machine translation task (i.e., WMT-2016 English-German). Multi-linear attention not only substantially compresses the model parameters but also obtains performance improvements over a number of language modeling approaches, such as Transformer, Transformer-XL, and Transformer with tensor train decomposition.
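
The abstract describes the idea at a high level: replace the independent per-head projections of multi-head attention with Tucker-style blocks that share a single set of Q/K/V factor matrices plus a small trainable core per block. The snippet below is a minimal PyTorch sketch of that parameter-sharing idea under simplifying assumptions of our own (diagonal cores, block outputs averaged rather than reconstructed from the full 3rd-order attention tensor); `SharedFactorBlockAttention` and its parameters are illustrative names, not the authors' implementation of Multi-linear attention.

```python
import torch
import torch.nn as nn


class SharedFactorBlockAttention(nn.Module):
    """Illustrative sketch, not the paper's exact Multi-linear attention.

    Each block is a Tucker-style bilinear form with its own diagonal core
    g_b, while all blocks share one set of Q/K/V projections (the
    parameter-sharing idea). Block outputs are averaged and projected back.
    """

    def __init__(self, d_model: int, d_head: int, num_blocks: int):
        super().__init__()
        self.d_head = d_head
        # Shared factor matrices (parameters reused by every block).
        self.w_q = nn.Linear(d_model, d_head, bias=False)
        self.w_k = nn.Linear(d_model, d_head, bias=False)
        self.w_v = nn.Linear(d_model, d_head, bias=False)
        # One trainable diagonal core per block.
        self.cores = nn.Parameter(torch.ones(num_blocks, d_head))
        self.w_o = nn.Linear(d_head, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        outputs = []
        for g in self.cores:  # g: (d_head,), the block's diagonal core
            # Diagonal-core bilinear scores: Q diag(g) K^T / sqrt(d_head).
            scores = torch.einsum('bid,d,bjd->bij', q, g, k) / self.d_head ** 0.5
            outputs.append(torch.softmax(scores, dim=-1) @ v)
        # Average block outputs (a simplification of the paper's reconstruction).
        return self.w_o(torch.stack(outputs).mean(dim=0))


# Quick shape check.
attn = SharedFactorBlockAttention(d_model=64, d_head=16, num_blocks=4)
print(attn(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

Because the blocks reuse one set of Q/K/V projections and each adds only a `d_head`-sized core, the parameter count grows far more slowly with the number of blocks than standard multi-head attention does with the number of heads, which is the compression effect the abstract refers to.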

Authors (7)
  1. Xindian Ma (6 papers)
  2. Peng Zhang (641 papers)
  3. Shuai Zhang (319 papers)
  4. Nan Duan (172 papers)
  5. Yuexian Hou (23 papers)
  6. Dawei Song (62 papers)
  7. Ming Zhou (182 papers)
Citations (157)