Cross-Lingual Natural Language Generation via Pre-Training (1909.10481v3)

Published 23 Sep 2019 in cs.CL

Abstract: In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages. We propose to pretrain the encoder and the decoder of a sequence-to-sequence model under both monolingual and cross-lingual settings. The pre-training objective encourages the model to represent different languages in the shared space, so that we can conduct zero-shot cross-lingual transfer. After the pre-training procedure, we use monolingual data to fine-tune the pre-trained model on downstream NLG tasks. Then the sequence-to-sequence model trained in a single language can be directly evaluated beyond that language (i.e., accepting multi-lingual input and producing multi-lingual output). Experimental results on question generation and abstractive summarization show that our model outperforms the machine-translation-based pipeline methods for zero-shot cross-lingual generation. Moreover, cross-lingual transfer improves NLG performance of low-resource languages by leveraging rich-resource language data. Our implementation and data are available at https://github.com/CZWin32768/xnlg.

Cross-Lingual Natural Language Generation via Pre-Training

The paper "Cross-Lingual Natural Language Generation via Pre-Training," authored by Zewen Chi et al., addresses the problem of transferring supervision signals of natural language generation (NLG) tasks across multiple languages. This work is particularly focused on the challenge of extending the capabilities of NLG models trained in high-resource languages (e.g., English) to low-resource languages without the need for direct supervision in those languages.

Methodology

The authors propose a cross-lingual pre-trained sequence-to-sequence model (termed XNLG) comprising both an encoder and a decoder, pre-trained under monolingual and cross-lingual settings and subsequently fine-tuned on downstream NLG tasks. Pre-training uses several strategically designed objectives (a minimal sketch of the corresponding input corruptions follows the list):

  1. Monolingual Masked Language Modeling (MLM): Akin to BERT's pre-training task, this objective captures rich monolingual contextual representations.
  2. Denoising Auto-Encoding (DAE): This objective assists in pre-training the encoder-decoder attention by reconstructing sentences from perturbed inputs.
  3. Cross-Lingual MLM (XMLM): Extending MLM to bilingual corpora, this task trains the model to capture cross-lingual semantic alignment within a shared representation space.
  4. Cross-Lingual Auto-Encoding (XAE): Essentially a translation-style objective, XAE trains the decoder to generate in a language different from the input, which discourages spurious correlations between the source language and the language of the target sentences.

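To make these objectives concrete, the following is a minimal, self-contained sketch of the input corruptions that MLM- and DAE-style objectives typically rely on. The masking ratio and noise functions below are generic defaults assumed for illustration, not the paper's reported hyperparameters; XMLM applies the same kind of masking to concatenated bilingual sentence pairs.

```python
import random

MASK, PAD = "[MASK]", "[PAD]"

def mlm_corrupt(tokens, mask_prob=0.15, vocab=None, seed=None):
    """BERT-style masking: each selected position is replaced by [MASK] (80%),
    a random token (10%), or left unchanged (10%); only selected positions
    contribute to the prediction loss."""
    rng = random.Random(seed)
    vocab = vocab or list(tokens)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            targets.append(tok)   # predict the original token at this position
            r = rng.random()
            corrupted.append(MASK if r < 0.8 else rng.choice(vocab) if r < 0.9 else tok)
        else:
            targets.append(PAD)   # position excluded from the loss
            corrupted.append(tok)
    return corrupted, targets

def dae_corrupt(tokens, drop_prob=0.1, shuffle_window=3, seed=None):
    """Denoising auto-encoding input: randomly drop tokens and lightly shuffle
    the rest; the decoder is trained to reconstruct the original sentence."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > drop_prob]
    order = sorted(range(len(kept)), key=lambda i: i + rng.uniform(0, shuffle_window))
    return [kept[i] for i in order]

if __name__ == "__main__":
    sentence = "cross lingual pre training aligns languages in a shared space".split()
    print(mlm_corrupt(sentence, seed=0))
    print(dae_corrupt(sentence, seed=0))
```
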
This pre-training paradigm induces a shared cross-lingual semantic space, so that after fine-tuning on monolingual data for a downstream task, the model supports multilingual input and output in zero-shot fashion, without task-specific parallel data.
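
Schematically, with generic notation rather than the paper's own symbols, fine-tuning and zero-shot evaluation can be summarized as

$$
\theta^{*} = \arg\max_{\theta} \sum_{(x^{\mathrm{en}},\, y^{\mathrm{en}})} \log p\left(y^{\mathrm{en}} \mid x^{\mathrm{en}};\, \theta\right),
\qquad
\hat{y}^{\mathrm{zh}} = \arg\max_{y}\; p\left(y \mid x^{\mathrm{zh}};\, \theta^{*}\right),
$$

where supervised pairs exist only in English, and decoding for Chinese (or any other pre-trained language) relies on the shared representation space learned during pre-training.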

Experimental Results

In evaluating XNLG, the paper focuses on two cross-lingual NLG tasks: question generation (QG) and abstractive summarization (AS). The model outperforms machine-translation-based pipeline methods across different evaluation metrics and settings.

  1. Question Generation: The model is tested on English-Chinese and Chinese-English language pairs for QG tasks, delivering significant improvements in BLEU-4, METEOR, and ROUGE scores over baselines like XLM and pipeline methods relying on translation systems.
  2. Abstractive Summarization: Similarly, in zero-shot summarization for French and Chinese, XNLG achieves higher ROUGE scores, highlighting the robustness of the cross-lingual transfer (an illustrative metric computation is sketched after this list).

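For reference, BLEU and ROUGE scores of the kind reported here can be computed with standard open-source packages. The snippet below is an illustrative sketch using sacrebleu and rouge-score, not the evaluation scripts used by the authors; the example strings are invented, and Chinese outputs would additionally require character- or word-level segmentation before scoring.

```python
# Illustrative metric computation only; not the paper's evaluation code.
# Requires: pip install sacrebleu rouge-score
import sacrebleu
from rouge_score import rouge_scorer

hypotheses = ["what does the pre-training objective encourage ?"]
references = ["what does the pre-training objective encourage the model to do ?"]

# Corpus-level BLEU (4-gram by default in sacreBLEU).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU-4: {bleu.score:.2f}")

# Sentence-level ROUGE-1 / ROUGE-L F-measures.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(references[0], hypotheses[0])
print({name: round(s.fmeasure, 3) for name, s in scores.items()})
```
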
The research highlights that cross-lingual pre-training can effectively enhance NLG performance in low-resource languages by leveraging knowledge from richer datasets. Additionally, the methodology mitigates issues like error propagation associated with traditional pipeline methods reliant on machine translation.

Implications and Future Work

The proposed cross-lingual NLG framework opens avenues for leveraging shared linguistic resources in multilingual settings. The authors argue for the potential application of this approach in entirely unsupervised contexts, suggesting future work could focus on improving pre-training towards fully unsupervised NLG. Furthermore, enhancements could explore more complex language pairs and the addition of more languages, potentially involving deeper models or alternative training objectives to optimize cross-lingual language mapping.

In conclusion, this work stands as a significant contribution to the field of multilingual NLP, providing a scalable and flexible architecture for NLG tasks across diverse language pairs and resource levels. As NLG applications expand globally, such innovations are critical in democratizing access to AI-driven language technologies.

Authors (6)
  1. Zewen Chi
  2. Li Dong
  3. Furu Wei
  4. Wenhui Wang
  5. Xian-Ling Mao
  6. Heyan Huang
Citations (130)