Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models (2310.12818v1)

Published 19 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, parameter sharing does not alleviate the computational burden of inference, which impedes its practicality in situations characterized by stringent latency requirements or limited computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings.
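
To make the neural-ODE framing concrete, below is a minimal sketch (not the authors' implementation) of the core idea: repeated application of one shared block can be read as Euler steps of dh/dt = f(h), so inference can be accelerated by taking fewer solver steps. The block definition, model width, and step counts here are illustrative assumptions; the paper's pre-training technique for making the reduced-step model accurate is not shown.

```python
# Sketch: parameter-shared stacking viewed as Euler integration of an ODE,
# with inference speedup obtained by reducing the number of solver steps.
import torch
import torch.nn as nn

class SharedBlock(nn.Module):
    """One set of weights reused at every step (parameter sharing)."""
    def __init__(self, d_model: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # f(h): the vector field of the ODE dh/dt = f(h).
        return self.ff(self.norm(h))

def ode_encode(block: SharedBlock, h: torch.Tensor, num_steps: int) -> torch.Tensor:
    # Repeated application of the shared block as Euler steps:
    #   h_{k+1} = h_k + (1 / num_steps) * f(h_k)
    # Fewer steps mean fewer forward passes through the shared block,
    # trading fidelity to the "full-depth" trajectory for speed.
    dt = 1.0 / num_steps
    for _ in range(num_steps):
        h = h + dt * block(h)
    return h

if __name__ == "__main__":
    torch.manual_seed(0)
    block = SharedBlock()
    x = torch.randn(2, 16, 256)                 # (batch, seq_len, d_model)
    full = ode_encode(block, x, num_steps=12)   # "full-depth" computation
    fast = ode_encode(block, x, num_steps=4)    # accelerated inference
    print((full - fast).abs().mean().item())    # rough gauge of the approximation gap
```

In this reading, the step count plays the role of effective depth, which is why a fully or partially shared model can be run with fewer iterations at inference time without changing its stored parameters.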

Authors (8)
  1. Weize Chen (34 papers)
  2. Xiaoyue Xu (2 papers)
  3. Xu Han (270 papers)
  4. Yankai Lin (125 papers)
  5. Ruobing Xie (97 papers)
  6. Zhiyuan Liu (433 papers)
  7. Maosong Sun (337 papers)
  8. Jie Zhou (687 papers)