Do Multilingual Language Models Capture Differing Moral Norms? (2203.09904v1)

Published 18 Mar 2022 in cs.CL

Abstract: Massively multilingual sentence representations are trained on large corpora of uncurated data, with a very imbalanced proportion of languages included in the training data. This may cause the models to grasp cultural values, including moral judgments, from the high-resource languages and impose them on the low-resource languages. The lack of data in certain languages can also lead the models to develop random and thus potentially harmful beliefs. Both issues can negatively influence zero-shot cross-lingual model transfer and potentially lead to harmful outcomes. Therefore, we aim to (1) detect and quantify these issues by comparing different models in different languages, and (2) develop methods for mitigating the models' undesirable properties. Our initial experiments using the multilingual model XLM-R show that multilingual LMs do capture moral norms, potentially with even higher agreement with human judgments than monolingual ones. However, it is not yet clear to what extent these moral norms differ between languages.
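The abstract does not describe the probing setup in detail. As a rough illustration of the kind of comparison involved, here is a minimal sketch of embedding the same moral statement in two languages with XLM-R and measuring their similarity. The model checkpoint, mean pooling, and the example sentences are all assumptions for illustration, not the paper's actual method.

```python
# Hypothetical probe: compare XLM-R sentence representations of the same
# moral statement across languages. Checkpoint and pooling are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, dim)

# The same (hypothetical) moral statement in English and German; a
# high-/low-resource language pair would be the more interesting case.
en = embed("Helping a stranger is good.")
de = embed("Einem Fremden zu helfen ist gut.")
print(torch.cosine_similarity(en, de).item())
```

A cross-lingual probe along these lines would compare such similarities, or downstream moral-judgment predictions, across many language pairs to quantify how much the captured norms diverge.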

Authors (6)
  1. Katharina Hämmerl
  2. Björn Deiseroth
  3. Patrick Schramowski
  4. Jindřich Libovický
  5. Alexander Fraser
  6. Kristian Kersting
Citations (11)