
Synthetic Data Generation in Low-Resource Settings via Fine-Tuning of Large Language Models (2310.01119v2)

Published 2 Oct 2023 in cs.CL and cs.LG

Abstract: The in-context learning ability of LLMs enables them to generalize to novel downstream tasks with relatively few labeled examples. However, they require enormous computational resources to be deployed. Alternatively, smaller models can solve specific tasks if fine-tuned with enough labeled examples. These examples, however, are expensive to obtain. In pursuit of the best of both worlds, we study synthetic data generation of fine-tuning training data via fine-tuned teacher LLMs to improve the downstream performance of much smaller models. In four text classification and two text generation tasks, we find that both data generation and annotation dramatically improve the respective downstream model's performance, occasionally necessitating only a minor fraction of the original training dataset.
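The abstract describes a pipeline in which a teacher LLM is fine-tuned on a small set of labeled examples, used to generate or annotate synthetic training data, and a much smaller student model is then fine-tuned on that synthetic data. The sketch below illustrates this idea with Hugging Face `transformers`; the model names (`gpt2` as a stand-in for the fine-tuned teacher, `distilbert-base-uncased` as the student), the prompt template, and the binary label set are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch of the teacher -> synthetic data -> student pipeline.
# Model names, prompt format, and labels are assumptions for illustration.
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

LABELS = ["negative", "positive"]  # assumed binary classification task

# 1) Teacher: a generative LLM (hypothetically already fine-tuned on a few labeled examples).
teacher_name = "gpt2"  # stand-in for the fine-tuned teacher LLM
teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

def generate_synthetic_examples(label: str, n: int = 8) -> list[dict]:
    """Prompt the teacher to produce synthetic inputs for a given label."""
    prompt = f"Write a short {label} product review:\n"
    inputs = teacher_tok(prompt, return_tensors="pt")
    outputs = teacher.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=40,
        num_return_sequences=n,
        pad_token_id=teacher_tok.eos_token_id,
    )
    texts = teacher_tok.batch_decode(outputs, skip_special_tokens=True)
    # Strip the prompt prefix and attach the label used to condition generation.
    return [{"text": t[len(prompt):], "label": LABELS.index(label)} for t in texts]

synthetic = [ex for lab in LABELS for ex in generate_synthetic_examples(lab)]
train_ds = Dataset.from_list(synthetic)

# 2) Student: a much smaller model fine-tuned on the synthetic data only.
student_name = "distilbert-base-uncased"
student_tok = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForSequenceClassification.from_pretrained(
    student_name, num_labels=len(LABELS)
)

def tokenize(batch):
    return student_tok(batch["text"], truncation=True, padding="max_length", max_length=64)

train_ds = train_ds.map(tokenize, batched=True)

trainer = Trainer(
    model=student,
    args=TrainingArguments(
        output_dir="student-out",
        num_train_epochs=1,
        per_device_train_batch_size=8,
        report_to=[],
    ),
    train_dataset=train_ds,
)
trainer.train()
```

In practice the paper also evaluates teacher-based annotation of unlabeled text, not only free-form generation; the sketch covers only the generation variant.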

Authors (2)
  1. Jean Kaddour (18 papers)
  2. Qi Liu (485 papers)
Citations (1)
