Self-Alignment: Improving Alignment of Cultural Values in LLMs via In-Context Learning (2408.16482v1)

Published 29 Aug 2024 in cs.CL

Abstract: Improving the alignment of LLMs with respect to the cultural values that they encode has become an increasingly important topic. In this work, we study whether we can exploit existing knowledge about cultural values at inference time to adjust model responses to cultural value probes. We present a simple and inexpensive method that uses a combination of in-context learning (ICL) and human survey data, and show that we can improve the alignment to cultural values across 5 models that include both English-centric and multilingual LLMs. Importantly, we show that our method could prove useful in test languages other than English and can improve alignment to the cultural values that correspond to a range of culturally diverse countries.
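The abstract describes an inference-time approach: combine in-context learning with human survey data so that survey-derived question/answer pairs steer the model's responses to cultural value probes. A minimal sketch of that idea follows; the survey items, answers, and function name are hypothetical illustrations, not the paper's actual prompts or data.

```python
# Sketch of inference-time cultural self-alignment via in-context learning:
# prepend (survey question, human answer) exemplars to a value probe before
# querying an LLM. All exemplars below are hypothetical placeholders; the
# paper's actual survey items and prompt templates differ.

def build_self_alignment_prompt(survey_examples, probe):
    """Format survey (question, answer) pairs as in-context examples,
    then append the target probe for the model to complete."""
    blocks = [
        f"Question: {question}\nAnswer: {answer}"
        for question, answer in survey_examples
    ]
    blocks.append(f"Question: {probe}\nAnswer:")
    return "\n\n".join(blocks)

# Hypothetical exemplars standing in for human survey responses
# from a given country or culture.
examples = [
    ("How important is family in your daily life?", "Very important"),
    ("Is avoiding a fare on public transport ever justifiable?",
     "Never justifiable"),
]
prompt = build_self_alignment_prompt(
    examples, "How much do you trust people in your neighborhood?"
)
print(prompt)
```

The same prompt-construction step would precede each cultural value probe at inference time, with exemplars drawn from the survey data for the target culture; no fine-tuning is involved, which is what makes the method inexpensive.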

Authors (2)
  1. Rochelle Choenni (17 papers)
  2. Ekaterina Shutova (52 papers)
Citations (3)
