
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora (2412.05149v1)

Published 6 Dec 2024 in cs.CL

Abstract: The BabyLM Challenge is a community effort to close the data-efficiency gap between human and computational language learners. Participants compete to optimize language model training on a fixed language data budget of 100 million words or less. This year, we released improved text corpora, as well as a vision-and-language corpus to facilitate research into cognitively plausible vision-language models. Submissions were compared on evaluation tasks targeting grammatical ability, (visual) question answering, pragmatic abilities, and grounding, among other abilities. Participants could submit to a 10M-word text-only track, a 100M-word text-only track, and/or a 100M-word and image multimodal track. From 31 submissions employing diverse methods, a hybrid causal-masked language model architecture outperformed other approaches. No submissions outperformed the baselines in the multimodal track. In follow-up analyses, we found a strong relationship between training FLOPs and average performance across tasks, and that the best-performing submissions proposed changes to the training data, training objective, and model architecture. This year's BabyLM Challenge shows that there is still significant room for innovation in this setting, in particular for image-text modeling, but community-driven research can yield actionable insights about effective strategies for small-scale language modeling.
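The follow-up analysis relates training FLOPs to average task performance. As a rough illustration only (not necessarily the organizers' exact accounting), training FLOPs for a transformer are often approximated with the 6·N·D rule of thumb, where N is the parameter count and D is the number of training tokens seen. The helper name, epoch count, and tokens-per-word ratio below are illustrative assumptions:

```python
# Minimal sketch: the common 6 * N * D approximation for transformer
# training FLOPs. The BabyLM organizers' exact FLOPs accounting may differ.

def approx_training_flops(n_params: float, n_tokens: float, n_epochs: int = 1) -> float:
    """Estimate total training FLOPs as 6 * parameters * tokens seen."""
    return 6.0 * n_params * n_tokens * n_epochs

# Hypothetical example: a 125M-parameter model trained for 10 epochs on the
# 100M-word track, assuming roughly 1.3 tokens per word (~130M tokens).
flops = approx_training_flops(n_params=125e6, n_tokens=130e6, n_epochs=10)
print(f"{flops:.2e} training FLOPs")  # ~9.75e+17
```

Under a fixed 100M-word data budget, total FLOPs grow only through model size and repeated epochs, which is why compute-matched comparisons across submissions are informative here.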

Authors (10)
  1. Michael Y. Hu (15 papers)
  2. Aaron Mueller (35 papers)
  3. Candace Ross (25 papers)
  4. Adina Williams (72 papers)
  5. Tal Linzen (73 papers)
  6. Chengxu Zhuang (15 papers)
  7. Ryan Cotterell (226 papers)
  8. Leshem Choshen (78 papers)
  9. Alex Warstadt (35 papers)
  10. Ethan Gotlieb Wilcox (9 papers)