MTG: A Benchmark Suite for Multilingual Text Generation (2108.07140v2)

Published 13 Aug 2021 in cs.CL

Abstract: We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation. It is the first multilingual multiway text generation dataset, and the largest with human annotations (400k instances). It covers four generation tasks (story generation, question generation, title generation, and text summarization) across five languages (English, German, French, Spanish, and Chinese). The multiway setup enables testing a model's knowledge-transfer capabilities across languages and tasks. Using MTG, we train and analyze several popular multilingual generation models from multiple perspectives. The benchmark's human-annotated parallel data supports model performance improvements and provides comprehensive evaluation across diverse generation scenarios. Code and data are available at https://github.com/zide05/MTG.
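
As an illustration of the multiway setup described in the abstract, the sketch below iterates over the 4 x 5 task/language grid. The file layout, field names, and the `load_split` helper are hypothetical, introduced only for illustration; see the repository linked above for the actual data format.

```python
import json
from itertools import product
from pathlib import Path

# The four tasks and five languages covered by MTG (from the abstract).
TASKS = ["story_generation", "question_generation", "title_generation", "summarization"]
LANGUAGES = ["en", "de", "fr", "es", "zh"]

def load_split(data_dir: str, task: str, lang: str, split: str = "train"):
    """Yield examples for one (task, language) cell of the multiway grid.

    Assumes one JSON-lines file per cell, e.g. data/summarization/en/train.jsonl
    with "source" and "target" fields -- a hypothetical layout, not
    necessarily the one used in the MTG repository.
    """
    path = Path(data_dir) / task / lang / f"{split}.jsonl"
    if not path.exists():
        return  # empty generator for cells that are absent locally
    with path.open(encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    # Count examples in every cell of the task/language grid.
    for task, lang in product(TASKS, LANGUAGES):
        n = sum(1 for _ in load_split("data", task, lang))
        print(f"{task:22s} {lang}: {n} examples")
```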

Authors (8)
  1. Yiran Chen (176 papers)
  2. Zhenqiao Song (14 papers)
  3. Xianze Wu (2 papers)
  4. Danqing Wang (37 papers)
  5. Jingjing Xu (80 papers)
  6. Jiaze Chen (17 papers)
  7. Hao Zhou (351 papers)
  8. Lei Li (1293 papers)
Citations (19)

