Is Multilingual BERT Fluent in Language Generation? (1910.03806v1)

Published 9 Oct 2019 in cs.CL and cs.LG

Abstract: The multilingual BERT model is trained on 104 languages and meant to serve as a universal language model and tool for encoding sentences. We explore how well the model performs on several languages across several tasks: a diagnostic classification probing the embeddings for a particular syntactic property, a cloze task testing the language modelling ability to fill in gaps in a sentence, and a natural language generation task testing for the ability to produce coherent text fitting a given context. We find that the currently available multilingual BERT model is clearly inferior to the monolingual counterparts, and cannot in many cases serve as a substitute for a well-trained monolingual model. We find that the English and German models perform well at generation, whereas the multilingual model is lacking, in particular, for Nordic languages.

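The cloze task described in the abstract amounts to masked-token prediction: hide a word and ask the model to fill the gap. The sketch below illustrates such a probe with the 104-language multilingual BERT checkpoint; it uses the Hugging Face Transformers fill-mask pipeline and an illustrative Finnish sentence, which are assumptions for demonstration and not the authors' exact experimental setup.

```python
# Minimal cloze-style probe with multilingual BERT (illustrative sketch,
# not the paper's evaluation pipeline).
from transformers import pipeline

# bert-base-multilingual-cased is the publicly available 104-language model
# discussed in the paper.
fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Hypothetical Finnish example: one token is masked and the model is asked
# to fill the gap in context.
predictions = fill_mask("Helsinki on Suomen [MASK].")
for p in predictions:
    print(f"{p['token_str']:>12}  score={p['score']:.3f}")
```

Comparing the ranked fillers and their scores against those of a well-trained monolingual model is one way to see the gap the paper reports, particularly for the Nordic languages.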
Authors (4)
  1. Samuel Rönnqvist (14 papers)
  2. Jenna Kanerva (17 papers)
  3. Tapio Salakoski (9 papers)
  4. Filip Ginter (28 papers)
Citations (70)