Fusing Sentence Embeddings Into LSTM-based Autoregressive Language Models (2208.02402v2)
Abstract: Although masked language models are highly performant and widely adopted by NLP practitioners, they cannot be easily used for autoregressive language modelling (next word prediction and sequence probability estimation). We present an LSTM-based autoregressive language model which uses prefix embeddings (from a pretrained masked language model) via fusion (e.g. concatenation) to obtain a richer context representation for language modelling. We find that fusion reliably lowers perplexity (16.74 $\rightarrow$ 15.80), and that the improvement is preserved after transfer to a dataset from a different domain than the training data. We also evaluate the best-performing fusion model by correlating its next-word surprisal estimates with human reading times. Contrary to our expectation, and despite the overall improvement in perplexity, the correlation remains the same as for the baseline model. Lastly, while we focus on language models pre-trained on text as the sources for fusion, our approach could be extended to fuse any information represented as a fixed-size vector into an autoregressive language model, e.g. sentence-external information retrieved from a knowledge base or representations from multi-modal encoders.
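The abstract describes concatenation-based fusion of a fixed-size prefix embedding into an LSTM language model. The following is a minimal sketch of that idea (not the authors' implementation); the class name, dimensions, and the choice to concatenate at the output layer are illustrative assumptions.

```python
# Sketch: an LSTM language model that fuses a fixed-size prefix/sentence
# embedding (e.g. from a pretrained masked LM) via concatenation.
import torch
import torch.nn as nn

class FusionLSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, fused_dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Output projection sees [LSTM hidden state ; fused prefix embedding]
        self.out = nn.Linear(hidden_dim + fused_dim, vocab_size)

    def forward(self, token_ids, prefix_emb):
        # token_ids:  (batch, seq_len)   previous tokens
        # prefix_emb: (batch, fused_dim) fixed-size context vector, held
        #             constant across time steps
        x = self.embed(token_ids)                      # (batch, seq, emb_dim)
        h, _ = self.lstm(x)                            # (batch, seq, hidden_dim)
        fused = prefix_emb.unsqueeze(1).expand(-1, h.size(1), -1)
        h = torch.cat([h, fused], dim=-1)              # concatenation fusion
        return self.out(h)                             # next-token logits

# Usage (random inputs, illustrative only):
# model = FusionLSTMLM(vocab_size=10000)
# logits = model(torch.randint(0, 10000, (2, 7)), torch.randn(2, 768))
```

Because the fused vector is fixed-size, the same mechanism could in principle accept any such representation (knowledge-base lookups, multi-modal encoder outputs), as the abstract notes.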