Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models (2308.01684v2)

Published 3 Aug 2023 in cs.CL

Abstract: LLMs demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability. This ability could be applied to building babylike models, i.e., models at small scales, improving training efficiency. In this paper, we propose a "CoThought" pipeline, which efficiently trains smaller "baby" LLMs (BabyLMs) by leveraging the Chain of Thought prompting of LLMs. Our pipeline restructures a dataset of less than 100M words using GPT-3.5-turbo, transforming it into task-oriented, human-readable texts that are comparable to the school texts for language learners. The BabyLM is then pretrained on this restructured dataset in a RoBERTa fashion. In evaluations across 4 benchmarks, our BabyLM outperforms the vanilla RoBERTa in 10 linguistic, NLU, and question-answering tasks by more than 3 points, showing a superior ability to extract contextual information. These results suggest that compact LMs pretrained on small, LLM-restructured data can better understand tasks and achieve improved performance.
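The abstract describes a two-stage pipeline: first, GPT-3.5-turbo restructures the raw corpus into task-oriented, human-readable texts via chain-of-thought prompting; second, a small RoBERTa-style masked language model is pretrained from scratch on the restructured texts. The sketch below shows one plausible way to wire these stages together with the OpenAI and Hugging Face libraries. The prompt wording, model configuration, and training settings are illustrative assumptions, not the paper's actual choices.

```python
# Minimal sketch of the two CoThought stages (prompt text, model size, and
# hyperparameters are assumptions for illustration, not the paper's settings).
from openai import OpenAI
from datasets import Dataset
from transformers import (
    RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def restructure(passage: str) -> str:
    """Stage 1: ask GPT-3.5-turbo to rewrite a raw passage into a
    task-oriented, human-readable text via a chain-of-thought style prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                "Think step by step about the key facts in the passage, "
                "then rewrite it as a short, coherent lesson text suitable "
                "for a language learner.\n\nPassage:\n" + passage
            ),
        }],
    )
    return response.choices[0].message.content


# Stage 2: pretrain a small RoBERTa-style masked LM on the restructured texts.
raw_passages = ["..."]  # placeholder for the <100M-word corpus
texts = [restructure(p) for p in raw_passages]

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
dataset = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

config = RobertaConfig(vocab_size=tokenizer.vocab_size)  # assumed config
model = RobertaForMaskedLM(config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="babylm", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```

The key design point is that only the data changes: the downstream model is a standard masked LM trained with the usual RoBERTa objective, so any gains are attributable to the LLM-restructured corpus rather than to architectural modifications.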

Authors (5)
  1. Zheyu Zhang (23 papers)
  2. Han Yang (61 papers)
  3. Bolei Ma (18 papers)
  4. David Rügamer (74 papers)
  5. Ercong Nie (25 papers)
Citations (2)
