To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks

Published 15 Jun 2020 in cs.CL, cs.LG, and stat.ML | arXiv:2006.08671v1

Abstract: Pretraining NLP models with variants of the Masked Language Model (MLM) objective has recently led to significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grows into the millions, the accuracy gap between finetuning a BERT-based model and training a vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models may reach a point of diminishing returns as the supervised data size increases significantly.
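The experimental setup described in the abstract can be sketched in code: finetune a pretrained BERT and train a vanilla LSTM from scratch on the same classification task, sweeping the amount of supervised data. The sketch below is illustrative rather than the authors' code; `train_and_eval` is a hypothetical helper standing in for a standard supervised training loop plus held-out evaluation, and the LSTM hyperparameters are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of the paper's comparison (not the authors' code):
# finetune BERT vs. train an LSTM from scratch, sweeping training-set size.
import torch
import torch.nn as nn
from transformers import BertForSequenceClassification, BertTokenizerFast

class LSTMClassifier(nn.Module):
    """Vanilla bidirectional LSTM text classifier trained from scratch."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, input_ids):
        x = self.embed(input_ids)              # (batch, seq_len, embed_dim)
        _, (h, _) = self.lstm(x)               # h: (2, batch, hidden_dim)
        h = torch.cat([h[-2], h[-1]], dim=-1)  # concat fwd/bwd final states
        return self.fc(h)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# train_and_eval is a HYPOTHETICAL helper: train the given model on the
# first n_train labeled examples, then return held-out accuracy.
for n_train in [10_000, 100_000, 1_000_000]:
    bert = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)
    lstm = LSTMClassifier(vocab_size=tokenizer.vocab_size)
    acc_bert = train_and_eval(bert, n_train)
    acc_lstm = train_and_eval(lstm, n_train)
    print(f"n={n_train}: BERT {acc_bert:.3f}  LSTM {acc_lstm:.3f}  "
          f"gap {acc_bert - acc_lstm:.3f}")
```

Under the paper's finding, the printed accuracy gap should shrink to within about 1% as the training-set size grows into the millions.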

Citations (24)

Authors (3)
