
FreeLM: Fine-Tuning-Free Language Model (2305.01616v1)

Published 2 May 2023 in cs.CL and cs.AI

Abstract: Pre-trained language models (PLMs) have achieved remarkable success in NLP tasks. Despite this success, mainstream solutions largely follow the pre-training-then-fine-tuning paradigm, which brings both high deployment costs and low training efficiency. Nevertheless, fine-tuning on a specific task is essential because PLMs are pre-trained only with the language signal from large raw data. In this paper, we propose a novel fine-tuning-free strategy for language models that considers both the language signal and a teacher signal. The teacher signal is an abstraction of a battery of downstream tasks, provided in a unified proposition format. Trained with both language and strong task-aware teacher signals in an interactive manner, our FreeLM model demonstrates strong generalization and robustness. FreeLM outperforms large models, e.g., GPT-3 and InstructGPT, on a range of language understanding tasks in experiments. FreeLM is much smaller, with 0.3B parameters, compared to 175B in these models.
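
The abstract describes interleaving two objectives over a shared backbone: a standard causal language-modeling loss on raw text and a task-aware "teacher" loss in which downstream tasks are verbalized as propositions the model classifies as correct or not. The sketch below illustrates that alternation on toy data; the backbone size, alternation schedule, heads, and data format are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of FreeLM-style interleaved training (assumptions: toy random
# data, a small GPT-style backbone, and a simple even/odd alternation schedule;
# the paper's actual architecture, proposition format, and schedule may differ).
import torch
import torch.nn as nn

VOCAB, D_MODEL, MAX_LEN = 1000, 128, 64

class SharedBackbone(nn.Module):
    """Causal transformer shared by the language and teacher objectives."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(MAX_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ids):
        L = ids.size(1)
        h = self.tok(ids) + self.pos(torch.arange(L, device=ids.device))
        mask = nn.Transformer.generate_square_subsequent_mask(L).to(ids.device)
        return self.encoder(h, mask=mask)          # (batch, L, D_MODEL)

backbone = SharedBackbone()
lm_head = nn.Linear(D_MODEL, VOCAB)                # language signal: next-token prediction
prop_head = nn.Linear(D_MODEL, 2)                  # teacher signal: proposition true/false
opt = torch.optim.AdamW(
    list(backbone.parameters()) + list(lm_head.parameters()) + list(prop_head.parameters()),
    lr=1e-4,
)
ce = nn.CrossEntropyLoss()

for step in range(100):
    ids = torch.randint(0, VOCAB, (8, MAX_LEN))    # toy batch of token ids
    h = backbone(ids)
    if step % 2 == 0:
        # Language iteration: standard causal LM loss on raw text.
        logits = lm_head(h[:, :-1])
        loss = ce(logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))
    else:
        # Teacher iteration: a downstream task verbalized as a proposition,
        # classified as correct or incorrect from the final position.
        labels = torch.randint(0, 2, (8,))         # toy proposition labels
        loss = ce(prop_head(h[:, -1]), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because both heads share the backbone, the teacher signal shapes the same representations the language signal trains, which is the mechanism the abstract credits for strong task performance without per-task fine-tuning.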

Authors (5)
  1. Xiang Li (1003 papers)
  2. Xin Jiang (242 papers)
  3. Xuying Meng (18 papers)
  4. Aixin Sun (99 papers)
  5. Yequan Wang (44 papers)
Citations (3)
