
Prefix Propagation: Parameter-Efficient Tuning for Long Sequences (2305.12086v2)

Published 20 May 2023 in cs.CL

Abstract: Parameter-efficient tuning aims to mitigate the large memory requirements of adapting pretrained LLMs for downstream tasks. For example, one popular method, prefix-tuning, prepends trainable tokens to sequences while freezing the rest of the model's parameters. Although such models attain comparable performance with fine-tuning when applied to sequences with short to moderate lengths, we show their inferior performance when modelling long sequences. To bridge this gap, we propose prefix-propagation, a simple but effective approach that conditions prefixes on previous hidden states. We empirically demonstrate that prefix-propagation outperforms prefix-tuning across long-document tasks, while using 50% fewer parameters. To further investigate the proposed architecture, we also show its advantage in calibration, and perform an additional study of its relationship with kernel attention. To the best of our knowledge, this work is the first to focus on parameter-efficient learning for long-sequence language tasks.
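The abstract contrasts two ideas: prefix-tuning injects fresh trainable key/value prefixes into attention at every layer, whereas prefix-propagation prepends prefixes to the hidden states so that deeper-layer prefixes are conditioned on earlier hidden states, roughly halving the trainable parameters. The sketch below is not the authors' implementation; it is a minimal single-head, single-layer illustration in PyTorch, and all module names, shapes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch contrasting prefix-tuning with prefix-propagation.
# Not the paper's code; shapes and class names are assumptions for illustration.
import math
import torch
import torch.nn as nn

class PrefixTuningAttention(nn.Module):
    """Prefix-tuning: static trainable key/value prefixes, re-injected at every layer.
    Two trainable tensors per layer -> 2 * prefix_len * d_model parameters."""
    def __init__(self, d_model: int, prefix_len: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model))
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model))

    def forward(self, x):                       # x: (batch, seq, d_model)
        b, d = x.size(0), x.size(-1)
        k = torch.cat([self.prefix_k.expand(b, -1, -1), self.k(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), self.v(x)], dim=1)
        attn = torch.softmax(self.q(x) @ k.transpose(-2, -1) / math.sqrt(d), dim=-1)
        return attn @ v                         # (batch, seq, d_model)

class PrefixPropagationAttention(nn.Module):
    """Prefix-propagation: prefixes are prepended to the hidden states and flow
    through attention, so later layers see prefix states conditioned on earlier
    hidden states. One trainable tensor per layer -> about half the parameters."""
    def __init__(self, d_model: int, prefix_len: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.prefix = nn.Parameter(torch.randn(prefix_len, d_model))
        self.prefix_len = prefix_len

    def forward(self, x, propagated_prefix=None):
        b, d = x.size(0), x.size(-1)
        if propagated_prefix is None:
            # First layer: prepend the trainable prefix to the token hidden states.
            h = torch.cat([self.prefix.expand(b, -1, -1), x], dim=1)
        else:
            # Later layers: combine the trainable prefix with the propagated prefix states.
            h = torch.cat([propagated_prefix + self.prefix, x], dim=1)
        attn = torch.softmax(self.q(h) @ self.k(h).transpose(-2, -1) / math.sqrt(d), dim=-1)
        out = attn @ self.v(h)
        # Return updated token states and the prefix states to propagate onward.
        return out[:, self.prefix_len:], out[:, :self.prefix_len]
```

As a usage sketch, stacking several `PrefixPropagationAttention` layers and threading the returned prefix states from one layer into the next reproduces the "conditioning on previous hidden states" described in the abstract, while only the base model's attention projections stay frozen in an actual parameter-efficient setup.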

Authors (4)
  1. Jonathan Li (62 papers)
  2. Will Aitken (3 papers)
  3. Rohan Bhambhoria (10 papers)
  4. Xiaodan Zhu (94 papers)
Citations (10)