Language Models with Transformers (1904.09408v2)

Published 20 Apr 2019 in cs.CL, cs.AI, and cs.LG

Abstract: The Transformer architecture is superior to RNN-based models in computational efficiency. Recently, GPT and BERT demonstrate the efficacy of Transformer models on various NLP tasks using pre-trained language models on large-scale corpora. Surprisingly, these Transformer architectures are suboptimal for language modeling itself. Neither self-attention nor the positional encoding in the Transformer is able to efficiently incorporate the word-level sequential context crucial to language modeling. In this paper, we explore effective Transformer architectures for language modeling, including adding additional LSTM layers to better capture the sequential context while still keeping the computation efficient. We propose Coordinate Architecture Search (CAS) to find an effective architecture through iterative refinement of the model. Experimental results on PTB, WikiText-2, and WikiText-103 show that CAS achieves perplexities between 20.42 and 34.11 on all problems, i.e. on average an improvement of 12.0 perplexity units compared to state-of-the-art LSTMs. The source code is publicly available.
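The core architectural idea in the abstract, appending LSTM layers to a Transformer stack so that word-level sequential context is modeled explicitly, can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation or its CAS search procedure; the class name, hyperparameters, and the omission of positional encodings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransformerWithLSTM(nn.Module):
    """Hypothetical sketch: a Transformer encoder followed by an LSTM layer,
    loosely in the spirit of the architectures the paper searches over.
    The LSTM re-introduces left-to-right sequential context that
    self-attention plus positional encoding capture less directly."""

    def __init__(self, vocab_size, d_model=512, n_heads=8,
                 n_layers=6, lstm_hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        # Added LSTM layer on top of the Transformer outputs.
        self.lstm = nn.LSTM(d_model, lstm_hidden, batch_first=True)
        self.out = nn.Linear(lstm_hidden, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                      # (batch, seq, d_model)
        # Causal mask so each position attends only to earlier tokens,
        # as required for language modeling.
        seq_len = tokens.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.encoder(x, mask=mask)
        h, _ = self.lstm(h)                         # sequential refinement
        return self.out(h)                          # next-token logits

# Usage sketch with dummy data.
model = TransformerWithLSTM(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 32))
logits = model(tokens)                              # (2, 32, 10000)
```

In the paper, CAS searches over which pre-trained Transformer layers to keep and where to add such LSTM layers; the sketch above fixes one such configuration by hand purely for illustration.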

Authors (3)
  1. Chenguang Wang (59 papers)
  2. Mu Li (95 papers)
  3. Alexander J. Smola (33 papers)
Citations (110)