Unsupervised Text Style Transfer using Language Models as Discriminators (1805.11749v3)

Published 30 May 2018 in cs.CL

Abstract: Binary classifiers are often employed as discriminators in GAN-based unsupervised style transfer systems to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with this approach is that the error signal provided by the discriminator can be unstable and is sometimes insufficient to train the generator to produce fluent language. In this paper, we propose a new technique that uses a target domain language model as the discriminator, providing richer and more stable token-level feedback during the learning process. We train the generator to minimize the negative log likelihood (NLL) of generated sentences, evaluated by the language model. By using a continuous approximation of discrete sampling under the generator, our model can be trained using back-propagation in an end-to-end fashion. Moreover, our empirical results show that when using a language model as a structured discriminator, it is possible to forgo adversarial steps during training, making the process more stable. We compare our model with previous work using convolutional neural networks (CNNs) as discriminators and show that our approach leads to improved performance on three tasks: word substitution decipherment, sentiment modification, and related language translation.

Authors (5)
  1. Zichao Yang (27 papers)
  2. Zhiting Hu (75 papers)
  3. Chris Dyer (91 papers)
  4. Eric P. Xing (192 papers)
  5. Taylor Berg-Kirkpatrick (106 papers)
Citations (267)

Summary

Unsupervised Text Style Transfer using Language Models as Discriminators

The paper addresses unsupervised text style transfer by using a language model as a structured discriminator. This method improves on GAN-based systems built around binary classifiers, which often provide unstable training signals and insufficient feedback for generating fluent sentences. The authors show that a language-model discriminator makes training more stable and provides more granular, token-level feedback.

Methodology Overview

The core of the proposed approach is to use a target-domain language model to evaluate the fluency of transferred sentences: the language model replaces the standard adversarial binary classifier as the discriminator. The generator is trained to minimize the negative log-likelihood (NLL) that the language model assigns to its outputs, and a Gumbel-softmax continuous approximation of discrete sampling allows the whole model to be trained end-to-end with backpropagation.
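The sketch below illustrates this loss in PyTorch. It is not the authors' code: `target_lm` is a hypothetical frozen target-domain language model that maps input embeddings to next-token logits and exposes an `embedding` table, and the temperature `tau` is an illustrative choice. The generator's logits are relaxed with Gumbel-softmax, embedded as a weighted average of the LM's embedding table, and scored by the LM's NLL.

```python
import torch
import torch.nn.functional as F

def lm_discriminator_loss(gen_logits, target_lm, tau=0.5):
    """gen_logits: (batch, seq_len, vocab) unnormalized generator scores."""
    # Continuous approximation of discrete sampling (Gumbel-softmax relaxation).
    soft_tokens = F.gumbel_softmax(gen_logits, tau=tau, hard=False)   # (B, T, V)

    # Soft embedding lookup: weighted average over the LM's embedding table
    # (target_lm.embedding is an assumed attribute of this hypothetical LM).
    emb = soft_tokens @ target_lm.embedding.weight                    # (B, T, D)

    # Frozen target-domain LM predicts the next token at each position.
    lm_logits = target_lm(emb[:, :-1])                                # (B, T-1, V)
    log_probs = F.log_softmax(lm_logits, dim=-1)

    # Expected NLL of the generated (soft) continuation under the LM; this is
    # the signal that pushes the generator toward fluent target-domain text.
    nll = -(soft_tokens[:, 1:] * log_probs).sum(-1)                   # (B, T-1)
    return nll.mean()
```

Because the relaxation keeps everything differentiable, the loss flows back into the generator without REINFORCE-style sampling.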

Unlike conventional methods that rely on binary CNN-based classifiers to distinguish styles, this approach can forgo the adversarial training steps that typically require negative sampling and can destabilize learning. The language model provides a more stable signal because it assigns a probability to every token, rather than issuing a single binary judgment of whether a sentence is "real" or "fake".
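For contrast, a minimal sketch of the token-level scoring on a discrete sentence (again with a hypothetical frozen LM, here called `lm`, that maps token ids to next-token logits): every position yields its own NLL term, instead of one sentence-level real/fake score.

```python
import torch
import torch.nn.functional as F

def token_level_nll(token_ids, lm):
    """token_ids: (batch, seq_len) LongTensor; returns per-token NLL."""
    logits = lm(token_ids[:, :-1])                                    # (B, T-1, V)
    log_probs = F.log_softmax(logits, dim=-1)
    # Negative log-likelihood of each observed next token.
    nll = -log_probs.gather(-1, token_ids[:, 1:, None]).squeeze(-1)   # (B, T-1)
    return nll  # a score per token, not a single real/fake bit
```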

Empirical Evaluation

The proposed model was tested on three diverse tasks: word substitution decipherment, sentiment modification, and translation between related languages (e.g., Serbian to Bosnian, and simplified to traditional Chinese). Across all three, it consistently outperformed prior systems that use CNN-based discriminators or classifiers for style transfer.

  1. Word Substitution Decipherment: The language-model discriminator yielded significant improvements, especially when fewer than 100% of tokens were substituted, achieving higher BLEU scores without adversarial training.
  2. Sentiment Modification: The model preserved content and flipped sentiment accurately while producing more fluent sentences, as judged by BLEU and perplexity against contemporaneous methods (a sketch of these metrics follows this list).
  3. Related Language Translation: The model performed well on the simpler transformations (e.g., simplified to traditional Chinese), showing that the language-model discriminator carries over to quite different linguistic tasks.
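A minimal sketch of the two automatic metrics mentioned above, assuming the `sacrebleu` package for BLEU and a mean per-token NLL (e.g., from the scoring routine sketched earlier) for perplexity:

```python
import math
import sacrebleu

def bleu_score(hypotheses, references):
    # corpus_bleu takes a list of hypothesis strings and a list of
    # reference streams (one list of strings per reference set).
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

def perplexity(mean_nll):
    # Perplexity is the exponentiated average negative log-likelihood.
    return math.exp(mean_nll)
```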

Implications and Future Directions

Using language models as structured discriminators suggests a shift in how unsupervised text style transfer is trained, offering an alternative to the instability of GANs with binary classifiers. The approach simplifies the architecture while improving performance across multiple linguistic domains, and the ability to omit adversarial negative sampling makes training more straightforward, stable, and efficient.

Future research might extend this framework to semi-supervised settings where small amounts of parallel data are available, examining how language models can further bridge the gap between supervised and unsupervised text generation. Integrating complementary methods such as back-translation could improve transfer on more complex and nuanced style changes. Improvements to the underlying language-model architectures could also yield representations that better balance the desired attributes with content preservation, making this a promising direction for natural language generation research.