Speaking Multiple Languages Affects the Moral Bias of Language Models (2211.07733v2)

Published 14 Nov 2022 in cs.CL

Abstract: Pre-trained multilingual language models (PMLMs) are commonly used when dealing with data from multiple languages and cross-lingual transfer. However, PMLMs are trained on varying amounts of data for each language. In practice, this means their performance is often much better on English than on many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MoralDirection framework to multilingual models, comparing results in German, Czech, Arabic, Chinese, and English, (2) analyse model behaviour on filtered parallel subtitles corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that, indeed, PMLMs encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions. We release our code and models.

Authors (7)
  1. Katharina Hämmerl (7 papers)
  2. Björn Deiseroth (16 papers)
  3. Patrick Schramowski (48 papers)
  4. Jindřich Libovický (36 papers)
  5. Constantin A. Rothkopf (16 papers)
  6. Alexander Fraser (50 papers)
  7. Kristian Kersting (205 papers)
Citations (24)