
ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation (2112.12731v1)

Published 23 Dec 2021 in cs.CL

Abstract: Pre-trained language models have achieved state-of-the-art results in various NLP tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge enhanced models and trained a model with 10 billion parameters. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. In order to explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable texts. To reduce the computation overhead and carbon emission, we propose an online distillation framework for ERNIE 3.0 Titan, where the teacher model will teach students and train itself simultaneously. ERNIE 3.0 Titan is the largest Chinese dense pre-trained model so far. Empirical results show that ERNIE 3.0 Titan outperforms state-of-the-art models on 68 NLP datasets.
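
The abstract's online distillation framework, in which the teacher updates on its own objective while students distill from it in the same training run, can be illustrated with a minimal sketch. The paper trains on PaddlePaddle; the PyTorch code below, with hypothetical names (`teacher`, `student`, `batch` fields, `alpha`, `temperature`), is an assumption-laden illustration of the general idea, not the paper's implementation.

```python
# Sketch of one joint step of online distillation: the teacher trains
# itself on the language-modeling loss, and the student simultaneously
# learns from both the data and the teacher's soft targets.
import torch
import torch.nn.functional as F

def online_distillation_step(teacher, student, batch,
                             t_opt, s_opt, temperature=2.0, alpha=0.5):
    input_ids, labels = batch["input_ids"], batch["labels"]

    # Teacher update: ordinary next-token cross-entropy on the batch.
    t_logits = teacher(input_ids)
    t_loss = F.cross_entropy(t_logits.view(-1, t_logits.size(-1)),
                             labels.view(-1))
    t_opt.zero_grad()
    t_loss.backward()
    t_opt.step()

    # Student update: hard labels plus KL to the teacher's (detached)
    # softened logits, so no second teacher-only distillation run is needed.
    s_logits = student(input_ids)
    hard = F.cross_entropy(s_logits.view(-1, s_logits.size(-1)),
                           labels.view(-1))
    soft = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    s_loss = alpha * hard + (1 - alpha) * soft
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    return t_loss.item(), s_loss.item()
```

Because the teacher's forward pass is reused for the student's soft targets, the extra compute over plain pre-training is only the student's own pass, which is the overhead-reduction motivation the abstract cites.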

Authors (29)
  1. Shuohuan Wang (30 papers)
  2. Yu Sun (226 papers)
  3. Yang Xiang (187 papers)
  4. Zhihua Wu (24 papers)
  5. Siyu Ding (6 papers)
  6. Weibao Gong (5 papers)
  7. Shikun Feng (37 papers)
  8. Junyuan Shang (15 papers)
  9. Yanbin Zhao (14 papers)
  10. Chao Pang (23 papers)
  11. Jiaxiang Liu (39 papers)
  12. Xuyi Chen (9 papers)
  13. Yuxiang Lu (26 papers)
  14. Weixin Liu (12 papers)
  15. Xi Wang (275 papers)
  16. Yangfan Bai (2 papers)
  17. Qiuliang Chen (1 paper)
  18. Li Zhao (150 papers)
  19. Shiyong Li (24 papers)
  20. Peng Sun (210 papers)
Citations (71)