SLM: Learning a Discourse Language Representation with Sentence Unshuffling (2010.16249v1)

Published 30 Oct 2020 in cs.CL and cs.LG

Abstract: We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation in a fully self-supervised manner. Recent pre-training methods in NLP focus on learning either bottom- or top-level language representations: contextualized word representations derived from language modeling objectives at one extreme, and a whole-sequence representation learned by order classification of two given textual segments at the other. However, these models are not directly encouraged to capture representations of the intermediate-size structures that exist in natural language, such as sentences and the relationships among them. To that end, we propose a new approach that encourages learning a contextualized sentence-level representation by shuffling the sequence of input sentences and training a hierarchical transformer model to reconstruct the original ordering. Through experiments on downstream tasks such as GLUE, SQuAD, and DiscoEval, we show that this feature of our model improves the performance of the original BERT by large margins.
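The pre-training objective described in the abstract is sentence unshuffling: the sentences of a passage are permuted, and the model is trained to predict the original order from the shuffled input. The sketch below illustrates how such an objective can be set up; the `SentenceOrderingHead`, its GRU-based step queries, and the random sentence vectors are hypothetical stand-ins for illustration only, not the paper's hierarchical transformer built on BERT sentence representations.

```python
import random
import torch
import torch.nn as nn

def make_unshuffling_example(sentences):
    """Shuffle a passage's sentences and return (shuffled, target order).

    target[i] is the index within `shuffled` of the sentence that belongs
    at position i of the reconstructed passage, so predicting `target`
    recovers the original ordering.
    """
    perm = list(range(len(sentences)))
    random.shuffle(perm)
    shuffled = [sentences[j] for j in perm]
    target = [perm.index(i) for i in range(len(sentences))]
    return shuffled, target

class SentenceOrderingHead(nn.Module):
    """Toy pointer-style ordering head (illustrative simplification).

    Each decoding step produces a query that scores all shuffled sentence
    vectors; training with cross-entropy teaches it to point at the
    sentence that belongs at that step.
    """
    def __init__(self, hidden_size):
        super().__init__()
        self.step_queries = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, sentence_vecs):
        # sentence_vecs: (batch, num_sentences, hidden)
        queries, _ = self.step_queries(sentence_vecs)
        # scores: step i attends over all shuffled sentences
        return torch.matmul(queries, sentence_vecs.transpose(1, 2))

if __name__ == "__main__":
    sents = ["A man walks in.", "He orders coffee.", "He sits down.", "He leaves."]
    shuffled, target = make_unshuffling_example(sents)
    vecs = torch.randn(1, len(sents), 64)       # stand-in sentence encodings
    logits = SentenceOrderingHead(64)(vecs)     # (1, num_sents, num_sents)
    loss = nn.CrossEntropyLoss()(logits.squeeze(0), torch.tensor(target))
    print(shuffled, target, float(loss))
```

In the paper itself, the sentence representations come from a hierarchical transformer over BERT outputs rather than random vectors, but the shuffle-and-reconstruct supervision signal shown above is the same in spirit.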

Authors (4)
  1. Haejun Lee (9 papers)
  2. Drew A. Hudson (16 papers)
  3. Kangwook Lee (70 papers)
  4. Christopher D. Manning (169 papers)
Citations (46)