KronA: Parameter Efficient Tuning with Kronecker Adapter (2212.10650v1)

Published 20 Dec 2022 in cs.CL

Abstract: Fine-tuning a Pre-trained Language Model (PLM) on a specific downstream task has been a well-known paradigm in Natural Language Processing. However, with the ever-growing size of PLMs, training the entire model on several downstream tasks becomes very expensive and resource-hungry. Recently, different Parameter Efficient Tuning (PET) techniques have been proposed to improve the efficiency of fine-tuning PLMs. One popular category of PET methods is the low-rank adaptation methods, which insert learnable truncated SVD modules into the original model either sequentially or in parallel. However, low-rank decomposition suffers from limited representation power. In this work, we address this problem by using the Kronecker product instead of the low-rank representation. We introduce KronA, a Kronecker product-based adapter module for efficient fine-tuning of Transformer-based PLMs. We apply the proposed methods to fine-tune T5 on the GLUE benchmark and show that incorporating the Kronecker-based modules can outperform state-of-the-art PET methods.

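The abstract describes replacing the low-rank update of LoRA-style adapters with a Kronecker product of two small factors. The sketch below illustrates that idea in PyTorch; it is a minimal, hedged illustration, not the authors' reference implementation, and the class name, factor shapes, initialization, and scaling are assumptions chosen for clarity.

```python
# Minimal sketch of a Kronecker-product adapter in the spirit of KronA (assumed details).
import torch
import torch.nn as nn


class KroneckerAdapter(nn.Module):
    """Learns a weight update delta_W = scale * (A kron B) for a frozen d_out x d_in layer."""

    def __init__(self, d_in, d_out, a_rows, a_cols, scale=1.0):
        super().__init__()
        # Factor shapes must tile the full weight: a_rows * b_rows == d_out, a_cols * b_cols == d_in.
        assert d_out % a_rows == 0 and d_in % a_cols == 0
        b_rows, b_cols = d_out // a_rows, d_in // a_cols
        self.A = nn.Parameter(torch.randn(a_rows, a_cols) * 0.01)
        self.B = nn.Parameter(torch.zeros(b_rows, b_cols))  # zero init: no change at start of tuning
        self.scale = scale

    def forward(self, x):
        # Materialize the Kronecker product for clarity; an efficient variant can instead
        # reshape x and multiply by the small factors directly, avoiding the full matrix.
        delta_w = torch.kron(self.A, self.B) * self.scale
        return x @ delta_w.T


# Usage sketch: add the adapter output to a frozen linear projection.
frozen = nn.Linear(768, 768)
frozen.weight.requires_grad_(False)
adapter = KroneckerAdapter(d_in=768, d_out=768, a_rows=16, a_cols=16)
x = torch.randn(4, 768)
y = frozen(x) + adapter(x)  # only the small factors A and B are trained
```

Because only the two small factors are learned, the trainable parameter count is a_rows*a_cols + b_rows*b_cols rather than d_in*d_out, while the Kronecker product can still have full rank, which is the representational advantage the abstract highlights over truncated-SVD (low-rank) adapters.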
Authors (6)
  1. Ali Edalati (6 papers)
  2. Marzieh Tahaei (8 papers)
  3. Ivan Kobyzev (23 papers)
  4. Vahid Partovi Nia (40 papers)
  5. James J. Clark (32 papers)
  6. Mehdi Rezagholizadeh (78 papers)
Citations (71)