Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
41 tokens/sec
GPT-4o
59 tokens/sec
Gemini 2.5 Pro Pro
41 tokens/sec
o3 Pro
7 tokens/sec
GPT-4.1 Pro
50 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Analyzing Social Biases in Japanese Large Language Models (2406.02050v3)

Published 4 Jun 2024 in cs.CL

Abstract: With the development of LLMs, social biases in the LLMs have become a crucial issue. While various benchmarks for social biases have been provided across languages, the extent to which Japanese LLMs exhibit social biases has not been fully investigated. In this study, we construct the Japanese Bias Benchmark dataset for Question Answering (JBBQ) based on the English bias benchmark BBQ, and analyze social biases in Japanese LLMs. The results show that while current open Japanese LLMs improve their accuracies on JBBQ by setting larger parameters, their bias scores become larger. In addition, prompts with warnings about social biases and Chain-of-Thought prompting reduce the effect of biases in model outputs, but there is room for improvement in the consistency of reasoning.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (8)
  1. Hitomi Yanaka (29 papers)
  2. Ryoma Kumon (5 papers)
  3. Jie Lu (127 papers)
  4. Masashi Takeshita (6 papers)
  5. Ryo Sekizawa (3 papers)
  6. Taisei Kato (5 papers)
  7. Hiromi Arai (9 papers)
  8. Namgi Han (6 papers)
Citations (1)
X Twitter Logo Streamline Icon: https://streamlinehq.com

Tweets