Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus (2301.11796v1)

Published 27 Jan 2023 in cs.CL

Abstract: We present the call for papers for the BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus. This shared task is intended for participants with an interest in small-scale language modeling, human language acquisition, low-resource NLP, and cognitive modeling. In partnership with CoNLL and CMCL, we provide a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children. The task has three tracks, two of which restrict the training data to pre-released datasets of 10M and 100M words and are dedicated to explorations of approaches such as architectural variations, self-supervised objectives, or curriculum learning. The final track only restricts the amount of text used, allowing innovation in the choice of the data, its domain, and even its modality (i.e., data from sources other than text is welcome). We will release a shared evaluation pipeline which scores models on a variety of benchmarks and tasks, including targeted syntactic evaluations and natural language understanding.
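
The abstract does not spell out how the targeted syntactic evaluations are scored, but such evaluations conventionally use minimal pairs: a model passes an item when it assigns higher probability to the grammatical sentence than to a minimally different ungrammatical one. Below is a rough sketch of that scoring procedure, assuming a Hugging Face causal LM; the model name (`gpt2`, used as a stand-in for a participant's pretrained model) and the example sentence pairs are illustrative, not drawn from the challenge's actual pipeline.

```python
# Minimal-pair syntactic evaluation sketch (hypothetical, not the official
# BabyLM pipeline): a model passes an item when it assigns higher total
# log-probability to the grammatical sentence than to its ungrammatical twin.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; participants would load their own pretrained model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of per-token log-probabilities under the causal LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Token t is predicted from positions < t: drop the last logit row,
    # and score targets starting from the second token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs[torch.arange(targets.size(0)), targets].sum().item()

# Illustrative minimal pairs in the style of targeted syntactic evaluations.
pairs = [
    ("The keys to the cabinet are on the table.",
     "The keys to the cabinet is on the table."),
    ("No author has ever written anything like this.",
     "No author has never written anything like this."),
]

correct = sum(sentence_logprob(good) > sentence_logprob(bad) for good, bad in pairs)
print(f"Accuracy: {correct / len(pairs):.2f}")
```

Summing raw log-probabilities favors no particular sentence length here because each pair differs by only one or two tokens; evaluations comparing sentences of different lengths typically normalize by token count instead.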

Authors (6)
  1. Alex Warstadt (35 papers)
  2. Leshem Choshen (78 papers)
  3. Aaron Mueller (35 papers)
  4. Adina Williams (72 papers)
  5. Ethan Wilcox (24 papers)
  6. Chengxu Zhuang (15 papers)
Citations (42)