Not all layers are equally as important: Every Layer Counts BERT

Published 3 Nov 2023 in cs.CL (arXiv:2311.02265v2)

Abstract: This paper introduces a novel modification of the transformer architecture, tailored for the data-efficient pretraining of LLMs. This aspect is evaluated by participating in the BabyLM challenge, where our solution won both the strict and strict-small tracks. Our approach allows each transformer layer to select which outputs of previous layers to process. The empirical results verify the potential of this simple modification and show that not all layers are equally as important.
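The core modification described in the abstract is that each transformer layer reads a learned mixture of all earlier layers' outputs rather than only the immediately preceding one. Below is a minimal PyTorch sketch of that idea, assuming one learnable scalar weight per previous output (softmax-normalized here for simplicity); the class names, the use of `nn.TransformerEncoderLayer`, and the normalization choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WeightedLayerInput(nn.Module):
    """Combine the outputs of all previous layers into one input tensor,
    using a learnable scalar weight per previous output (assumption:
    softmax-normalized; the paper's exact weighting scheme may differ)."""
    def __init__(self, num_prev_outputs: int):
        super().__init__()
        # One logit per previous output (embeddings + earlier layers).
        self.logits = nn.Parameter(torch.zeros(num_prev_outputs))

    def forward(self, prev_outputs: list[torch.Tensor]) -> torch.Tensor:
        # prev_outputs: list of [batch, seq, hidden] tensors
        weights = torch.softmax(self.logits, dim=0)           # [L]
        stacked = torch.stack(prev_outputs, dim=0)            # [L, batch, seq, hidden]
        return (weights.view(-1, 1, 1, 1) * stacked).sum(0)   # weighted sum

class ElcStyleEncoder(nn.Module):
    """Stack of standard transformer layers where layer i consumes a learned
    mixture of the embedding output and the outputs of layers 0..i-1."""
    def __init__(self, num_layers: int = 4, hidden: int = 256, heads: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
            for _ in range(num_layers)
        )
        # Layer i sees i+1 previous outputs.
        self.mixers = nn.ModuleList(
            WeightedLayerInput(i + 1) for i in range(num_layers)
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        outputs = [embeddings]
        for layer, mixer in zip(self.layers, self.mixers):
            outputs.append(layer(mixer(outputs)))
        return outputs[-1]

if __name__ == "__main__":
    model = ElcStyleEncoder()
    x = torch.randn(2, 16, 256)      # dummy [batch, seq, hidden] embeddings
    print(model(x).shape)            # torch.Size([2, 16, 256])
```

After training, the learned weights can be inspected per layer to see which earlier outputs each layer actually relies on, which is how a "not all layers are equally important" observation could be read off such a model.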
