Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling (1812.10860v5)

Published 28 Dec 2018 in cs.CL

Abstract: Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-scale systematic study of candidate pretraining tasks, comparing 19 different tasks both as alternatives and complements to language modeling. Our primary results support the use of language modeling, especially when combined with pretraining on additional labeled-data tasks. However, our results are mixed across pretraining tasks and show some concerning trends: In ELMo's pretrain-then-freeze paradigm, random baselines are worryingly strong and results vary strikingly across target tasks. In addition, fine-tuning BERT on an intermediate task often negatively impacts downstream transfer. In a more positive trend, we see modest gains from multitask training, suggesting the development of more sophisticated multitask and transfer learning techniques as an avenue for further research.
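To make the two transfer regimes contrasted in the abstract concrete, below is a minimal sketch of (1) ELMo-style pretrain-then-freeze, where the encoder is frozen and only a task head is trained, and (2) BERT-style intermediate-task fine-tuning, where the full encoder is updated on an intermediate labeled task before the target task. The model name, toy classifier heads, and label counts are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the two transfer regimes discussed in the abstract.
# Assumes the `transformers` and `torch` libraries; hyperparameters are illustrative.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

encoder = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# (1) Pretrain-then-freeze: keep encoder weights fixed, train only a task head.
for p in encoder.parameters():
    p.requires_grad = False
frozen_head = nn.Linear(encoder.config.hidden_size, 3)  # e.g. a 3-way NLI target task

# (2) Intermediate-task fine-tuning: unfreeze the encoder, train it on an
# intermediate labeled task, then reuse the same encoder for the target task.
for p in encoder.parameters():
    p.requires_grad = True
intermediate_head = nn.Linear(encoder.config.hidden_size, 3)  # hypothetical intermediate task
target_head = nn.Linear(encoder.config.hidden_size, 2)        # hypothetical binary target task

def encode(sentences):
    """Return the [CLS] token representation as a sentence embedding."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0, :]

logits = frozen_head(encode(["A toy example sentence."]))
print(logits.shape)  # torch.Size([1, 3])
```

The paper's finding is that regime (2), despite its intuitive appeal, often hurts downstream transfer relative to fine-tuning directly on the target task.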

Authors (16)
  1. Alex Wang (32 papers)
  2. Jan Hula (10 papers)
  3. Patrick Xia (26 papers)
  4. Raghavendra Pappagari (11 papers)
  5. R. Thomas McCoy (33 papers)
  6. Roma Patel (16 papers)
  7. Najoung Kim (28 papers)
  8. Ian Tenney (21 papers)
  9. Yinghui Huang (13 papers)
  10. Katherin Yu (1 paper)
  11. Shuning Jin (4 papers)
  12. Berlin Chen (53 papers)
  13. Benjamin Van Durme (173 papers)
  14. Edouard Grave (56 papers)
  15. Ellie Pavlick (66 papers)
  16. Samuel R. Bowman (103 papers)
Citations (27)