
Discovering Useful Sentence Representations from Large Pretrained Language Models (2008.09049v1)

Published 20 Aug 2020 in cs.CL and cs.LG

Abstract: Despite the extensive success of pretrained LLMs as encoders for building NLP systems, they haven't seen prominence as decoders for sequence generation tasks. We explore the question of whether these models can be adapted to be used as universal decoders. To be considered "universal," a decoder must have an implicit representation for any target sentence $s$, such that it can recover that sentence exactly when conditioned on its representation. For large transformer-based LLMs trained on vast amounts of English text, we investigate whether such representations can be easily discovered using standard optimization methods. We present and compare three representation injection techniques for transformer-based models and three accompanying methods which map sentences to and from this representation space. Experiments show not only that representations exist for sentences from a variety of genres, but, more importantly, that our methods recover these sentences almost perfectly without complex optimization algorithms and without fine-tuning the underlying LLM at all.
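To make the idea concrete, here is a minimal sketch of the general recipe the abstract describes: freeze a pretrained transformer LM, attach a learnable sentence representation, and optimize that representation alone until the frozen model reproduces the target sentence. This is an illustration, not the paper's exact method: the paper compares three injection techniques and three mapping methods, whereas the sketch assumes one simple scheme (a single learned vector added to every input embedding), uses GPT-2 as a stand-in model, and picks the learning rate and step count arbitrarily.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()
for p in model.parameters():              # the language model itself stays frozen
    p.requires_grad_(False)

sentence = "The quick brown fox jumps over the lazy dog."
ids = tok(tok.bos_token + sentence, return_tensors="pt").input_ids.to(device)

wte = model.transformer.wte               # frozen token-embedding table
# Learnable sentence representation z (illustrative injection: added to all embeddings).
z = torch.zeros(model.config.n_embd, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)      # hyperparameters are assumptions, not from the paper

for step in range(500):
    opt.zero_grad()
    embeds = wte(ids) + z                 # inject z at every position
    out = model(inputs_embeds=embeds, labels=ids)  # teacher-forced cross-entropy
    out.loss.backward()                   # gradients flow only into z
    opt.step()

# Check recovery: greedy, free-running decode conditioned only on z and the BOS token.
with torch.no_grad():
    cur = ids[:, :1]
    for _ in range(ids.shape[1] - 1):
        logits = model(inputs_embeds=wte(cur) + z).logits
        nxt = logits[:, -1].argmax(dim=-1, keepdim=True)
        cur = torch.cat([cur, nxt], dim=1)

print(tok.decode(cur[0, 1:]))
print("exact match:", torch.equal(cur, ids))
```

If the optimized vector lets the frozen model regenerate the sentence token-for-token, that vector is an implicit representation of the sentence in the sense the abstract defines; the paper's contribution is showing such representations can be found reliably with standard optimizers, without touching the model weights.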

Authors (2)
  1. Nishant Subramani (16 papers)
  2. Nivedita Suresh (2 papers)
Citations (4)
