
Multilingual Pre-training with Language and Task Adaptation for Multilingual Text Style Transfer (2203.08552v1)

Published 16 Mar 2022 in cs.CL

Abstract: We exploit the pre-trained seq2seq model mBART for multilingual text style transfer. Using machine translated data as well as gold aligned English sentences yields state-of-the-art results in the three target languages we consider. Besides, in view of the general scarcity of parallel data, we propose a modular approach for multilingual formality transfer, which consists of two training strategies that target adaptation to both language and task. Our approach achieves competitive performance without monolingual task-specific parallel data and can be applied to other style transfer tasks as well as to other languages.
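The abstract describes fine-tuning the pre-trained seq2seq model mBART on parallel informal-formal pairs. The following is a minimal sketch of that general setup (not the authors' released code), using the Hugging Face transformers library; the checkpoint name, the toy Italian sentence pair, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): fine-tune mBART as a
# seq2seq formality-transfer model on an informal -> formal parallel pair.
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50"  # assumed multilingual mBART checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(
    model_name, src_lang="it_IT", tgt_lang="it_IT"  # same language: style transfer, not translation
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Toy parallel pair (informal -> formal), standing in for the task-specific,
# machine-translated, or gold-aligned English data mentioned in the abstract.
src = "ciao, mi dici dove sta la stazione?"
tgt = "Buongiorno, potrebbe indicarmi dove si trova la stazione?"

batch = tokenizer(src, text_target=tgt, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One training step with the standard seq2seq cross-entropy loss.
model.train()
loss = model(**batch).loss
loss.backward()
optimizer.step()

# Inference: generate the formal rewrite, forcing the Italian language token.
model.eval()
generated = model.generate(
    **tokenizer(src, return_tensors="pt"),
    forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

In the paper's modular variant, language adaptation and task adaptation are trained as separate steps so that monolingual task-specific parallel data is not required; the single-step fine-tuning above is only the simplest instance of the pipeline.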

Authors (3)
  1. Huiyuan Lai (17 papers)
  2. Antonio Toral (35 papers)
  3. Malvina Nissim (52 papers)
Citations (13)
