Encoder-Agnostic Adaptation for Conditional Language Generation (1908.06938v2)

Published 19 Aug 2019 in cs.CL

Abstract: Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. We therefore propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data efficient.
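
The abstract only names the mechanism, so the sketch below illustrates one plausible reading of pseudo self attention: the arbitrary conditioning states are projected with newly initialized key/value matrices and prepended to the pretrained self-attention's own keys and values, so the decoder queries attend over both. This is a minimal single-head PyTorch sketch; the function name, shapes, and the omission of multi-head splitting and causal masking are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def pseudo_self_attention(y, x, w_q, w_k, w_v, u_k, u_v):
    """Single-head pseudo self-attention (illustrative sketch).

    y:   decoder hidden states from the pretrained LM, shape (T_dec, d)
    x:   arbitrary encoder/conditioning states,        shape (T_enc, d)
    w_q, w_k, w_v: pretrained self-attention projections, shape (d, d)
    u_k, u_v:      new projections for the conditioning,  shape (d, d)
    """
    q = y @ w_q                               # queries come only from the LM side
    k = torch.cat([x @ u_k, y @ w_k], dim=0)  # conditioning keys prepended
    v = torch.cat([x @ u_v, y @ w_v], dim=0)  # conditioning values prepended
    scores = q @ k.t() / (q.size(-1) ** 0.5)  # scaled dot-product attention
    return F.softmax(scores, dim=-1) @ v      # attend jointly over x and y


# Toy usage: 5 decoder positions, 3 conditioning positions, model dim 16.
d = 16
y = torch.randn(5, d)
x = torch.randn(3, d)
w_q, w_k, w_v, u_k, u_v = (torch.randn(d, d) for _ in range(5))
out = pseudo_self_attention(y, x, w_q, w_k, w_v, u_k, u_v)
print(out.shape)  # torch.Size([5, 16])
```

Because only the new projections for the conditioning are initialized from scratch, the pretrained self-attention parameters are perturbed less than in adaptations that add entirely new attention blocks, which is consistent with the abstract's observation that pretrained transformers are sensitive to large parameter changes during tuning.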

Authors (4)
  1. Zachary M. Ziegler (6 papers)
  2. Luke Melas-Kyriazi (22 papers)
  3. Sebastian Gehrmann (48 papers)
  4. Alexander M. Rush (115 papers)
Citations (56)