
Incorporating Stylistic Lexical Preferences in Generative Language Models (2010.11553v1)

Published 22 Oct 2020 in cs.CL

Abstract: While recent advances in language modeling have resulted in powerful generation models, their generation style remains implicitly dependent on the training data and cannot emulate a specific target style. Leveraging the generative capabilities of a transformer-based language model, we present an approach to induce certain target-author attributes by incorporating continuous multi-dimensional lexical preferences of an author into generative language models. We introduce rewarding strategies in a reinforcement learning framework that encourage the use of words across multiple categorical dimensions, to varying extents. Our experiments demonstrate that the proposed approach can generate text that distinctively aligns with a given target author's lexical style. We conduct quantitative and qualitative comparisons with competitive and relevant baselines to illustrate the benefits of the proposed approach.
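The abstract describes a reward that encourages word usage across multiple categorical dimensions to match a target author's continuous lexical preferences. A minimal sketch of such a reward is shown below; the category names, word lists, and L1 distance are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a lexical-preference reward for an RL fine-tuning
# loop: score generated text by how closely its per-category word usage
# matches a target author's preference vector. Categories and word lists
# here are illustrative placeholders, not the paper's actual dimensions.
from collections import Counter

CATEGORIES = {
    "affect":    {"love", "hate", "happy", "sad"},
    "cognition": {"think", "know", "believe", "reason"},
    "social":    {"friend", "family", "talk", "we"},
}

def category_profile(tokens):
    """Fraction of tokens falling in each lexical category."""
    counts = Counter()
    for tok in tokens:
        for cat, words in CATEGORIES.items():
            if tok in words:
                counts[cat] += 1
    n = max(len(tokens), 1)
    return {cat: counts[cat] / n for cat in CATEGORIES}

def lexical_reward(generated_tokens, target_profile):
    """Higher (less negative) when the generated profile is closer,
    in L1 distance, to the target author's preference vector."""
    profile = category_profile(generated_tokens)
    distance = sum(abs(profile[c] - target_profile[c]) for c in CATEGORIES)
    return -distance  # would serve as the per-sample RL reward signal

# Example: a hypothetical target author profile and a generated sample.
target = {"affect": 0.10, "cognition": 0.05, "social": 0.02}
sample = "i love my friend and i think we talk often".split()
print(lexical_reward(sample, target))
```

Because the preferences are continuous proportions rather than binary word lists, the reward can push generation toward using each category "to varying extents," as the abstract describes.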

Authors (3)
  1. Hrituraj Singh (8 papers)
  2. Gaurav Verma (34 papers)
  3. Balaji Vasan Srinivasan (33 papers)
Citations (5)
