Incorporating Stylistic Lexical Preferences in Generative Language Models (2010.11553v1)
Abstract: While recent advances in language modeling have resulted in powerful generation models, their generation style remains implicitly dependent on the training data and cannot emulate a specific target style. Leveraging the generative capabilities of a transformer-based language model, we present an approach to induce target-author attributes by incorporating continuous multi-dimensional lexical preferences of an author into generative language models. We introduce rewarding strategies in a reinforcement learning framework that encourage the use of words across multiple categorical dimensions, to varying extents. Our experiments demonstrate that the proposed approach can generate text that distinctively aligns with a given target author's lexical style. We conduct quantitative and qualitative comparisons with competitive and relevant baselines to illustrate the benefits of the proposed approach.
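The abstract describes rewarding strategies that encourage word usage across multiple lexical categories to varying extents. The following is a minimal hypothetical sketch (not the authors' implementation) of one such reward: it scores a generated token sequence by how closely its per-category word usage matches a target author's continuous preference vector. The category lexicons, preference values, and function names are all illustrative assumptions.

```python
# Hypothetical sketch of a lexical-preference reward, NOT the paper's
# actual reward function. Category lexicons and target preferences are
# invented for illustration.
from collections import Counter

# Assumed category lexicons (in practice these would be much larger,
# e.g. drawn from a lexical resource such as LIWC-style word lists).
CATEGORY_LEXICONS = {
    "affect": {"love", "joy", "grief"},
    "cognition": {"think", "know", "reason"},
    "perception": {"see", "hear", "feel"},
}

def category_usage(tokens):
    """Fraction of tokens that fall into each lexical category."""
    counts = Counter()
    for tok in tokens:
        for cat, lexicon in CATEGORY_LEXICONS.items():
            if tok in lexicon:
                counts[cat] += 1
    total = max(len(tokens), 1)
    return {cat: counts[cat] / total for cat in CATEGORY_LEXICONS}

def lexical_reward(tokens, target_prefs):
    """Reward = 1 minus the L1 distance between the generated text's
    category usage and the target author's preference vector, so text
    matching the target's lexical profile scores higher."""
    usage = category_usage(tokens)
    dist = sum(abs(usage[c] - target_prefs[c]) for c in CATEGORY_LEXICONS)
    return 1.0 - dist

# Example: score a candidate generation against a target profile.
tokens = "i think i know what joy you feel".split()
target = {"affect": 0.2, "cognition": 0.2, "perception": 0.1}
print(round(lexical_reward(tokens, target), 3))  # → 0.85
```

In an RL fine-tuning loop (e.g. policy gradient), such a scalar reward would be computed on each sampled generation and used to update the language model, nudging it toward the target author's lexical distribution rather than any single fixed vocabulary.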
- Hrituraj Singh (8 papers)
- Gaurav Verma (34 papers)
- Balaji Vasan Srinivasan (33 papers)