FiLM: Fill-in Language Models for Any-Order Generation (2310.09930v1)
Abstract: Large language models (LLMs) have become the backbone of today's AI systems. However, their predominant left-to-right generation limits the use of bidirectional context, which is essential for tasks that involve filling text in the middle. We propose the Fill-in Language Model (FiLM), a new language modeling approach that allows flexible generation at any position without adhering to a specific generation order. Its training extends the masked language modeling objective by adopting varying mask probabilities sampled from the Beta distribution, which enhances FiLM's generative capabilities. During inference, FiLM can seamlessly insert missing phrases, sentences, or paragraphs, ensuring that the outputs are fluent and coherent with the surrounding context. In both automatic and human evaluations, FiLM outperforms existing infilling methods that rely on left-to-right language models trained on rearranged text segments. FiLM is easy to implement and can be either trained from scratch or fine-tuned from a left-to-right language model. Notably, as the model size grows, FiLM's perplexity approaches that of strong left-to-right language models of similar sizes, indicating FiLM's scalability and its potential as a large language model.
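
The abstract's core training idea, masking each sequence with a rate drawn from a Beta distribution rather than a fixed probability, can be illustrated with a minimal PyTorch sketch. The function name `film_mask_tokens` and the Beta parameters `alpha` and `beta` are illustrative assumptions, not values or APIs from the paper:

```python
import torch

def film_mask_tokens(input_ids: torch.Tensor, mask_token_id: int,
                     alpha: float = 2.0, beta: float = 2.0):
    """Sketch of Beta-distributed masking for FiLM-style training.

    One mask probability is drawn per sequence from Beta(alpha, beta),
    so some sequences are lightly masked (close to plain MLM) and others
    heavily masked (close to full generation). Parameter values here are
    assumptions for illustration only.
    """
    batch_size, seq_len = input_ids.shape
    # Sample one masking rate per sequence from the Beta distribution.
    mask_probs = torch.distributions.Beta(alpha, beta).sample((batch_size, 1))
    # Bernoulli draw at each position decides whether that token is masked.
    mask = torch.bernoulli(mask_probs.expand(batch_size, seq_len)).bool()
    # Loss is computed only on masked positions (-100 is the usual
    # ignore_index for cross-entropy in PyTorch).
    labels = input_ids.clone()
    labels[~mask] = -100
    masked_inputs = input_ids.clone()
    masked_inputs[mask] = mask_token_id
    return masked_inputs, labels
```

Sampling the rate per sequence, rather than fixing it as in standard masked language modeling, exposes the model to the full range of infilling conditions, from predicting a single missing token to generating nearly the whole text, which is what the abstract credits for FiLM's generative ability.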