
Improving Disentangled Text Representation Learning with Information-Theoretic Guidance (2006.00693v3)

Published 1 Jun 2020 in cs.LG and stat.ML

Abstract: Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc. Similar problems have been studied extensively for other forms of data, such as images and videos. However, the discrete nature of natural language makes the disentangling of textual representations more challenging (e.g., the manipulation over the data space cannot be easily achieved). Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text, without any supervision on semantics. A new mutual information upper bound is derived and leveraged to measure dependence between style and content. By minimizing this upper bound, the proposed method induces style and content embeddings into two independent low-dimensional spaces. Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation in terms of content and style preservation.
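The abstract only outlines the approach, but the central idea — penalizing an estimated upper bound on the mutual information between style and content embeddings so the two spaces become independent — can be illustrated with a short sketch. The snippet below assumes a CLUB-style variational estimator in PyTorch; the module names, dimensions, and Gaussian parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MIUpperBound(nn.Module):
    """Sketch of a variational (CLUB-style) upper bound on I(style; content).

    A variational network q(content | style) approximates the true conditional;
    the bound is E_p(s,c)[log q(c|s)] - E_p(s)p(c)[log q(c|s)].
    """
    def __init__(self, style_dim, content_dim, hidden_dim=128):
        super().__init__()
        self.net_mu = nn.Sequential(
            nn.Linear(style_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, content_dim))
        self.net_logvar = nn.Sequential(
            nn.Linear(style_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, content_dim), nn.Tanh())

    def log_likelihood(self, style, content):
        # Gaussian log q(content | style), up to additive/multiplicative constants.
        mu, logvar = self.net_mu(style), self.net_logvar(style)
        return (-(content - mu) ** 2 / logvar.exp() - logvar).sum(dim=1)

    def forward(self, style, content):
        # Positive pairs: matched (style, content) from the same sentence.
        positive = self.log_likelihood(style, content)
        # Negative pairs: content shuffled within the batch (samples from the marginals).
        shuffle = torch.randperm(content.size(0), device=content.device)
        negative = self.log_likelihood(style, content[shuffle])
        return (positive - negative).mean()  # estimated MI upper bound
```

In this sketch, the variational network would be fit by maximizing `log_likelihood` on matched pairs, while the text encoder is updated to minimize the returned bound, pushing the style and content embeddings toward independent low-dimensional spaces as described in the abstract.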

Authors (7)
  1. Pengyu Cheng (23 papers)
  2. Martin Renqiang Min (44 papers)
  3. Dinghan Shen (34 papers)
  4. Christopher Malon (14 papers)
  5. Yizhe Zhang (127 papers)
  6. Yitong Li (95 papers)
  7. Lawrence Carin (203 papers)
Citations (84)
