
Fast-ELECTRA for Efficient Pre-training (2310.07347v1)

Published 11 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: ELECTRA pre-trains LLMs by detecting tokens in a sequence that have been replaced by an auxiliary model. Although ELECTRA offers a significant boost in efficiency, its potential is constrained by the training cost brought by the auxiliary model. Notably, this model, which is jointly trained with the main model, only serves to assist the training of the main model and is discarded post-training. This results in a substantial amount of training cost being expended in vain. To mitigate this issue, we propose Fast-ELECTRA, which leverages an existing LLM as the auxiliary model. To construct a learning curriculum for the main model, we smooth its output distribution via temperature scaling following a descending schedule. Our approach rivals the performance of state-of-the-art ELECTRA-style pre-training methods, while significantly eliminating the computation and memory cost brought by the joint training of the auxiliary model. Our method also reduces the sensitivity to hyper-parameters and enhances the pre-training stability.
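The abstract describes the core mechanism: a frozen, pre-existing auxiliary LM proposes replacement tokens, its output distribution is smoothed by temperature scaling, and the temperature follows a descending schedule so the detection task gets harder over training. Below is a minimal sketch of that idea, assuming a PyTorch setup with a HuggingFace-style auxiliary model; the function names, the linear schedule, and the specific temperature values are illustrative assumptions rather than the paper's exact recipe.

```python
# Hypothetical sketch of a Fast-ELECTRA-style training step: a frozen auxiliary
# LM with temperature-smoothed sampling feeds a replaced-token-detection loss.
import torch
import torch.nn.functional as F

def temperature_at(step, total_steps, t_start=2.0, t_end=1.0):
    # Descending schedule: high temperature early (smoother distribution,
    # easier replacements), approaching 1.0 later. Linear shape and endpoint
    # values are assumptions for illustration.
    frac = min(step / max(total_steps, 1), 1.0)
    return t_start + (t_end - t_start) * frac

def fast_electra_step(main_model, frozen_aux_lm, input_ids, mask_positions,
                      step, total_steps):
    # 1) The frozen auxiliary LM proposes replacements at masked positions;
    #    no gradients flow into it, so its training cost is zero.
    with torch.no_grad():
        aux_logits = frozen_aux_lm(input_ids).logits      # (B, L, V), HF-style API assumed
        tau = temperature_at(step, total_steps)
        probs = F.softmax(aux_logits / tau, dim=-1)       # temperature-smoothed distribution
        sampled = torch.multinomial(probs.flatten(0, 1), 1).view_as(input_ids)

    corrupted = torch.where(mask_positions, sampled, input_ids)
    labels = (corrupted != input_ids).float()             # 1 = token was replaced

    # 2) The main model is trained with a binary replaced-token-detection loss.
    #    Here main_model is assumed to return per-token detection logits (B, L).
    rtd_logits = main_model(corrupted)
    return F.binary_cross_entropy_with_logits(rtd_logits, labels)
```

Because the auxiliary model is fixed, the descending temperature stands in for the curriculum that joint training would otherwise provide: early on the smoothed distribution yields easy-to-detect replacements, and as the temperature drops the replacements become more plausible and the detection task harder.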

Authors (6)
  1. Chengyu Dong (22 papers)
  2. Liyuan Liu (49 papers)
  3. Hao Cheng (190 papers)
  4. Jingbo Shang (141 papers)
  5. Jianfeng Gao (344 papers)
  6. Xiaodong Liu (162 papers)
Citations (2)
