
Prior Attention for Style-aware Sequence-to-Sequence Models (1806.09439v1)

Published 25 Jun 2018 in cs.CL

Abstract: We extend sequence-to-sequence models with the ability to control the characteristics or style of the generated output, via attention that is generated a priori (before decoding) from a latent code vector. After training an initial attention-based sequence-to-sequence model, we use a variational auto-encoder conditioned on representations of input sequences and a latent code vector space to generate attention matrices. By sampling the code vector from specific regions of this latent space during decoding and imposing the prior attention generated from it on the seq2seq model, the output can be steered towards particular attributes. We demonstrate this for the task of sentence simplification, where the latent code vector allows control over output length and lexical simplification, and enables fine-tuning to optimize for different evaluation metrics.
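
The abstract describes a two-stage pipeline: first train a standard attention-based seq2seq model, then train a conditional VAE that maps a source-sequence representation and a latent code z to a full attention matrix, which is imposed on the decoder at generation time. The sketch below illustrates that idea in PyTorch; all module names, dimensions, and the simple GRU decoder are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: module names, sizes, and the greedy GRU decoder
# are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class PriorAttentionVAE(nn.Module):
    """Conditional VAE mapping a source representation and a latent code z
    to a (tgt_len x src_len) prior attention matrix (rows sum to 1)."""
    def __init__(self, enc_dim=64, z_dim=8, tgt_len=20, src_len=20):
        super().__init__()
        self.tgt_len, self.src_len = tgt_len, src_len
        self.to_stats = nn.Linear(enc_dim, 2 * z_dim)        # -> (mu, logvar)
        self.to_attn = nn.Linear(enc_dim + z_dim, tgt_len * src_len)

    def forward(self, src_repr, z=None):
        mu, logvar = self.to_stats(src_repr).chunk(2, dim=-1)
        if z is None:                                        # reparameterization trick
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        logits = self.to_attn(torch.cat([src_repr, z], dim=-1))
        attn = logits.view(-1, self.tgt_len, self.src_len).softmax(dim=-1)
        return attn, mu, logvar

def decode_with_prior_attention(enc_states, prior_attn, embed, cell, out_proj,
                                bos_id=1, steps=20):
    """Greedy decoding where each step's context is pooled with the
    pre-generated attention row instead of attention computed on the fly."""
    B, S, H = enc_states.shape
    hx = enc_states.mean(dim=1)                              # simple decoder init (assumption)
    tok = torch.full((B,), bos_id, dtype=torch.long)
    out = []
    for t in range(steps):
        # (B,1,S) @ (B,S,H) -> (B,1,H): context fixed by the imposed attention
        context = torch.bmm(prior_attn[:, t:t + 1, :], enc_states).squeeze(1)
        hx = cell(torch.cat([embed(tok), context], dim=-1), hx)
        tok = out_proj(hx).argmax(dim=-1)
        out.append(tok)
    return torch.stack(out, dim=1)

# Example wiring (sizes arbitrary): steering means fixing z at decode time.
if __name__ == "__main__":
    B, S, H, E, V = 2, 20, 64, 32, 100
    enc_states = torch.randn(B, S, H)
    vae = PriorAttentionVAE(enc_dim=H, tgt_len=20, src_len=S)
    z = torch.full((B, 8), 0.5)                              # a "region" of the latent space
    prior_attn, _, _ = vae(enc_states.mean(dim=1), z=z)
    tokens = decode_with_prior_attention(
        enc_states, prior_attn,
        embed=nn.Embedding(V, E),
        cell=nn.GRUCell(E + H, H),
        out_proj=nn.Linear(H, V))
    print(tokens.shape)                                      # torch.Size([2, 20])
```

Under these assumptions, steering amounts to sampling z from a latent region associated with a desired attribute (e.g. shorter outputs) and decoding with the resulting prior attention.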

Authors (4)
  1. Lucas Sterckx (5 papers)
  2. Johannes Deleu (29 papers)
  3. Chris Develder (59 papers)
  4. Thomas Demeester (76 papers)
