Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models (2310.16240v1)

Published 24 Oct 2023 in cs.CL

Abstract: In this work, we propose a method that combines two popular research areas by injecting linguistic structures into pre-trained language models in the parameter-efficient fine-tuning (PEFT) setting. In our approach, parallel adapter modules encoding different linguistic structures are combined using a novel Mixture-of-Linguistic-Experts architecture, where Gumbel-Softmax gates are used to determine the importance of these modules at each layer of the model. To reduce the number of parameters, we first train the model for a fixed small number of steps before pruning the experts based on their importance scores. Our experimental results with three different pre-trained models show that our approach can outperform state-of-the-art PEFT methods with a comparable number of parameters. In addition, we analyze the experts selected by each model at each layer to provide insights for future studies.
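The gating mechanism described in the abstract can be illustrated with a minimal sketch. The following PyTorch code is a hypothetical illustration, not the authors' implementation: it combines parallel adapter "experts" with a per-layer Gumbel-Softmax gate. Module names, the bottleneck size, and the number of experts are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckAdapter(nn.Module):
    """A standard bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_size, bottleneck_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))


class MixtureOfLinguisticExperts(nn.Module):
    """Combines parallel adapter experts with a Gumbel-Softmax gate at one layer.

    Each expert is assumed to encode a different linguistic structure; the gate
    logits act as learnable per-layer importance scores.
    """

    def __init__(self, hidden_size, num_experts=3, tau=1.0):
        super().__init__()
        self.experts = nn.ModuleList(
            BottleneckAdapter(hidden_size) for _ in range(num_experts)
        )
        # One learnable logit per expert for this layer.
        self.gate_logits = nn.Parameter(torch.zeros(num_experts))
        self.tau = tau

    def forward(self, hidden_states):
        # Gumbel-Softmax yields (nearly) one-hot importance weights while staying
        # differentiable; hard=True uses the straight-through estimator.
        weights = F.gumbel_softmax(self.gate_logits, tau=self.tau, hard=True)
        expert_outputs = torch.stack(
            [expert(hidden_states) for expert in self.experts], dim=0
        )  # shape: (num_experts, batch, seq_len, hidden_size)
        # Weighted combination over experts.
        return torch.einsum("e,ebsh->bsh", weights, expert_outputs)
```

In this sketch, the pruning step mentioned in the abstract would correspond to training for a small number of steps, then dropping experts whose average gate weight falls below some threshold so that only the dominant expert(s) remain at each layer; the threshold and exact criterion are not specified here and would follow the paper's procedure.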

Authors (3)
  1. Raymond Li (24 papers)
  2. Gabriel Murray (6 papers)
  3. Giuseppe Carenini (52 papers)
Citations (2)
