
Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More (2402.03782v1)

Published 6 Feb 2024 in cs.CL and cs.AI

Abstract: Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM without modifying its parameters. This paper investigates the potential of SPT for cross-lingual transfer. Unlike previous studies on SPT for cross-lingual transfer that often fine-tune both the soft prompt and the model parameters, we adhere to the original intent of SPT by keeping the model parameters frozen and training only the soft prompt. Not only does this reduce the computational cost and storage overhead of full-model fine-tuning, but we also demonstrate that the very parameter efficiency intrinsic to SPT can enhance cross-lingual transfer to linguistically distant languages. Moreover, we explore how different factors related to the prompt, such as its length or reparameterization, affect cross-lingual transfer performance.
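
The mechanism described in the abstract, learnable embeddings prepended at the input layer of a frozen PLM, can be sketched in a few lines. The snippet below is an illustrative sketch only, not the authors' code: the XLM-RoBERTa backbone, the prompt length of 16, and the small classification head are assumptions added for the example.

```python
# Minimal sketch of Soft Prompt Tuning with a frozen multilingual PLM.
# Assumptions (not from the paper): xlm-roberta-base backbone, prompt length 16,
# and a toy classification head; the paper's actual setup may differ.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptClassifier(nn.Module):
    def __init__(self, model_name="xlm-roberta-base", prompt_length=16, num_labels=3):
        super().__init__()
        self.plm = AutoModel.from_pretrained(model_name)
        for p in self.plm.parameters():          # freeze all PLM parameters
            p.requires_grad = False
        hidden = self.plm.config.hidden_size
        # Learnable soft prompt: the only PLM-sized parameters that get trained.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        # Look up the (frozen) token embeddings, then prepend the soft prompt.
        tok_emb = self.plm.embeddings.word_embeddings(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, prompt.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attn = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.plm(inputs_embeds=inputs_embeds, attention_mask=attn)
        # Classify from the first position (an arbitrary choice for this sketch).
        return self.classifier(out.last_hidden_state[:, 0])

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = SoftPromptClassifier()
batch = tokenizer(["This paper studies soft prompts."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
# Only the soft prompt and the small head receive gradients.
print([n for n, p in model.named_parameters() if p.requires_grad])
```

Because the PLM stays frozen, each task or language only needs the prompt (and, in this sketch, a small head) to be stored, which is the storage and compute saving the abstract refers to.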

Authors (6)
  1. Fred Philippy (6 papers)
  2. Siwen Guo (6 papers)
  3. Shohreh Haddadan (4 papers)
  4. Cedric Lothritz (8 papers)
  5. Jacques Klein (89 papers)
  6. Tegawendé F. Bissyandé (82 papers)
Citations (1)