How does the pre-training objective affect what large language models learn about linguistic properties? (2203.10415v1)

Published 20 Mar 2022 in cs.CL

Abstract: Several pre-training objectives, such as masked language modeling (MLM), have been proposed to pre-train language models (e.g. BERT) with the aim of learning better language representations. However, to the best of our knowledge, no previous work has investigated how different pre-training objectives affect what BERT learns about linguistic properties. We hypothesize that linguistically motivated objectives such as MLM should help BERT acquire better linguistic knowledge than non-linguistically motivated objectives, for which the association between the input and the label to be predicted is unintuitive or hard for humans to guess. To this end, we pre-train BERT with two linguistically motivated objectives and three non-linguistically motivated ones. We then probe for linguistic characteristics encoded in the representations of the resulting models. We find strong evidence that there are only small differences in probing performance between the representations learned under the two types of objectives. These surprising results question the dominant narrative of linguistically informed pre-training.
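As a concrete illustration of the probing setup the abstract describes, below is a minimal sketch of training a linear probe on frozen BERT representations. It assumes the Hugging Face transformers and PyTorch libraries; the checkpoint name, probing task, and data are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal probing sketch: train a linear classifier on frozen BERT sentence
# representations. Hypothetical setup; the paper does not publish this code.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # inference mode; encoder parameters are never updated below

# Toy probing task: predict a sentence-level linguistic label (placeholder data)
sentences = ["The cat sat on the mat.", "Colourless green ideas sleep furiously."]
labels = torch.tensor([0, 1])

with torch.no_grad():
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    # Use the final-layer [CLS] vector as the sentence representation
    reps = encoder(**batch).last_hidden_state[:, 0, :]

probe = nn.Linear(reps.size(-1), 2)  # simple linear probe on top of frozen features
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):  # tiny training loop, for illustration only
    optimizer.zero_grad()
    loss = loss_fn(probe(reps), labels)
    loss.backward()
    optimizer.step()
```

In the paper's experiments, the same kind of probe would be applied to models pre-trained under each objective, and the probing accuracies compared across objectives.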

Authors (2)
  1. Ahmed Alajrami (2 papers)
  2. Nikolaos Aletras (72 papers)
Citations (18)
