
Reformulating Unsupervised Style Transfer as Paraphrase Generation

Published 12 Oct 2020 in cs.CL (arXiv:2010.05700v1)

Abstract: Modern NLP defines the task of style transfer as modifying the style of a given sentence without appreciably changing its semantics, which implies that the outputs of style transfer systems should be paraphrases of their inputs. However, many existing systems purportedly designed for style transfer inherently warp the input's meaning through attribute transfer, which changes semantic properties such as sentiment. In this paper, we reformulate unsupervised style transfer as a paraphrase generation problem, and present a simple methodology based on fine-tuning pretrained language models on automatically generated paraphrase data. Despite its simplicity, our method significantly outperforms state-of-the-art style transfer systems on both human and automatic evaluations. We also survey 23 style transfer papers, discover that existing automatic metrics can be easily gamed, and propose fixed variants. Finally, we pivot to a more real-world style transfer setting by collecting a large dataset of 15M sentences in 11 diverse styles, which we use for an in-depth analysis of our system.

Citations (224)

Summary

  • The paper reformulates unsupervised style transfer as paraphrase generation, introducing STRAP, a simple unsupervised method built on inverse paraphrasing.
  • Using a more robust joint evaluation metric, the study shows STRAP improves semantic fidelity and fluency, outperforming state-of-the-art models on style transfer tasks.
  • The paper introduces the Corpus of Diverse Styles (CDS) dataset and discusses implications for applications such as text simplification and data augmentation.

The paper "Reformulating Unsupervised Style Transfer as Paraphrase Generation" introduces a novel approach to the task of text style transfer. This research, undertaken by Kalpesh Krishna, John Wieting, and Mohit Iyyer, proposes a reformulation of style transfer as a controlled paraphrase generation problem. The authors depart from traditional attribute transfer methods, which often distort input semantics, in favor of an approach that preserves the original meaning of the input.

Methodology

The authors introduce a method named Style Transfer via Paraphrasing (STRAP), which operates in an unsupervised learning setting. The approach consists of three main steps:

  1. Pseudo-parallel Data Creation: Sentences from different styles are processed through a diverse paraphrase model to generate paraphrased sentences. This process effectively normalizes sentences by stripping away stylistic markers.
  2. Inverse Paraphrasing: Style-specific inverse paraphrase models are then trained to reconstruct the original stylized sentences from these normalized paraphrases. Each inverse paraphraser is obtained by fine-tuning a pretrained GPT-2 model on the pseudo-parallel data for its target style.
  3. Style Transfer Application: At inference time, an input sentence is first normalized by the paraphraser, and the inverse paraphraser for the desired target style is then applied to convert it into that style.

Notably, STRAP does not require any parallel data, reinforcement learning, or complex modeling paradigms, which are often unstable and challenging to reproduce.
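The inference-time flow of the three steps above can be sketched as follows. This is a minimal illustration of the control flow only: the two model calls are hypothetical stand-ins (simple string transforms), whereas in the paper both the diverse paraphraser and the per-style inverse paraphrasers are fine-tuned GPT-2 models.

```python
def normalize(sentence: str) -> str:
    """Stand-in for the diverse paraphraser: maps a stylized sentence
    to a 'neutral' paraphrase with stylistic markers reduced."""
    return sentence.lower().rstrip("!?.") + "."

def make_inverse_paraphraser(style_tag: str):
    """Stand-in factory for a style-specific inverse paraphraser,
    trained (in the paper) to reconstruct stylized text from
    normalized paraphrases. Here it just prepends a style tag."""
    def inverse(neutral: str) -> str:
        return f"{style_tag} {neutral}"
    return inverse

def strap_transfer(sentence: str, inverse_paraphraser) -> str:
    # Steps 1-2 happen at training time; at inference STRAP first
    # normalizes the input, then applies the target-style inverse model.
    neutral = normalize(sentence)
    return inverse_paraphraser(neutral)

shakespeare = make_inverse_paraphraser("[shakespeare]")
print(strap_transfer("WHAT A GREAT DAY!!", shakespeare))
# → [shakespeare] what a great day.
```

The key design point this sketch preserves is that no parallel style-to-style data is ever needed: each inverse paraphraser only sees (normalized paraphrase, original sentence) pairs generated automatically from its own style's corpus.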

Evaluation and Results

The authors critically assess current evaluation metrics for style transfer and identify significant shortcomings, particularly how existing metrics can be gamed. In response, they propose a more robust joint evaluation method that combines transfer accuracy, semantic similarity, and fluency at the sentence level. STRAP demonstrates significant performance improvements over state-of-the-art models on standard datasets for formality transfer and Shakespearean language style tasks. Specifically, STRAP achieves higher semantic similarity and fluency while maintaining competitive style transfer accuracy.
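The point of aggregating at the sentence level is that an output only counts if it satisfies all three criteria simultaneously, which blocks the degenerate strategies that game corpus-level averages (e.g., copying the input verbatim, or emitting fluent target-style text unrelated to the input). A rough sketch of this idea, with an illustrative similarity threshold and boolean classifier outputs standing in for the paper's actual style, similarity, and fluency models:

```python
def joint_score(examples, sim_threshold=0.7):
    """Sentence-level joint metric: an output passes only if it is
    simultaneously (a) classified as the target style, (b) semantically
    similar enough to the input, and (c) judged fluent."""
    passed = [
        ex["in_target_style"]
        and ex["similarity"] >= sim_threshold
        and ex["fluent"]
        for ex in examples
    ]
    return sum(passed) / len(passed)

outputs = [
    {"in_target_style": True,  "similarity": 0.90, "fluent": True},
    {"in_target_style": True,  "similarity": 0.40, "fluent": True},   # gamed: meaning warped
    {"in_target_style": False, "similarity": 0.95, "fluent": True},   # gamed: input copied
]
print(joint_score(outputs))  # 1 of 3 outputs passes all three checks
```

Averaging the three criteria separately over this toy set would reward the two gamed outputs; the joint score does not.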

New Dataset

To test real-world applicability, the study introduces the Corpus of Diverse Styles (CDS), a benchmark dataset comprising 15 million sentences across 11 diverse styles, including Tweets, Shakespearean English, and James Joyce's works. This dataset enables evaluation across a much broader range of style transfer tasks than prior benchmarks.

Implications and Future Directions

This work has notable implications for simplifying models and improving semantic fidelity in style transfer tasks. By fine-tuning pretrained language models and treating style transfer as paraphrase generation, the approach opens opportunities for applications such as author obfuscation, text simplification, and data augmentation without compromising semantic content.

Looking forward, research could explore applying this method at larger textual scales, such as paragraphs or entire documents, or transferring into styles not represented in the training data by using a few exemplars as references during inference. Integrating such few-shot capabilities into the STRAP framework could enhance its adaptability to diverse and unseen styles, ultimately broadening the scope of automated style transformation in natural language processing.
