
Segatron: Segment-Aware Transformer for Language Modeling and Understanding (2004.14996v2)

Published 30 Apr 2020 in cs.CL and cs.LG

Abstract: Transformers are powerful for sequence modeling. Nearly all state-of-the-art language models and pre-trained language models are based on the Transformer architecture. However, it distinguishes sequential tokens only with the token position index. We hypothesize that better contextual representations can be generated from the Transformer with richer positional information. To verify this, we propose a segment-aware Transformer (Segatron), by replacing the original token position encoding with a combined position encoding of paragraph, sentence, and token. We first introduce the segment-aware mechanism to Transformer-XL, which is a popular Transformer-based language model with memory extension and relative position encoding. We find that our method can further improve the Transformer-XL base model and large model, achieving 17.1 perplexity on the WikiText-103 dataset. We further investigate the pre-training masked language modeling task with Segatron. Experimental results show that BERT pre-trained with Segatron (SegaBERT) can outperform BERT with vanilla Transformer on various NLP tasks, and outperforms RoBERTa on zero-shot sentence representation learning.
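The core idea described in the abstract, replacing a single token-position encoding with a combined paragraph/sentence/token position encoding, can be sketched as follows. This is a minimal illustration assuming the absolute-embedding (SegaBERT-style) variant; the class name, embedding sizes, and the simple summation of the three embeddings are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class SegmentAwarePositionEmbedding(nn.Module):
    """Illustrative sketch: sum of paragraph, sentence, and token position
    embeddings, replacing a single absolute position embedding. Maximum index
    values are hypothetical."""

    def __init__(self, hidden_size=768, max_paragraphs=50,
                 max_sentences=100, max_tokens=256):
        super().__init__()
        self.paragraph_emb = nn.Embedding(max_paragraphs, hidden_size)
        self.sentence_emb = nn.Embedding(max_sentences, hidden_size)
        self.token_emb = nn.Embedding(max_tokens, hidden_size)

    def forward(self, paragraph_ids, sentence_ids, token_ids):
        # Each *_ids tensor has shape (batch, seq_len):
        #   paragraph_ids - index of the paragraph each token belongs to
        #   sentence_ids  - index of the sentence within its paragraph
        #   token_ids     - token offset within its sentence
        return (self.paragraph_emb(paragraph_ids)
                + self.sentence_emb(sentence_ids)
                + self.token_emb(token_ids))


# Usage sketch: the resulting tensor would be added to the token embeddings
# before the Transformer layers, in place of a single position embedding.
pos = SegmentAwarePositionEmbedding()
batch, seq_len = 2, 16
p = torch.zeros(batch, seq_len, dtype=torch.long)   # all tokens in paragraph 0
s = torch.zeros(batch, seq_len, dtype=torch.long)   # all tokens in sentence 0
t = torch.arange(seq_len).expand(batch, seq_len)    # token offsets 0..15
print(pos(p, s, t).shape)  # torch.Size([2, 16, 768])
```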

Authors (8)
  1. He Bai (50 papers)
  2. Peng Shi (80 papers)
  3. Jimmy Lin (208 papers)
  4. Yuqing Xie (24 papers)
  5. Luchen Tan (8 papers)
  6. Kun Xiong (8 papers)
  7. Wen Gao (114 papers)
  8. Ming Li (787 papers)
Citations (8)