
Localized Cultural Knowledge is Conserved and Controllable in Large Language Models (2504.10191v1)

Published 14 Apr 2025 in cs.CL and cs.AI

Abstract: Just as humans display language patterns influenced by their native tongue when speaking new languages, LLMs often default to English-centric responses even when generating in other languages. Nevertheless, we observe that local cultural information persists within the models and can be readily activated for cultural customization. We first demonstrate that explicitly providing cultural context in prompts significantly improves the models' ability to generate culturally localized responses. We term the disparity in model performance with versus without explicit cultural context the explicit-implicit localization gap, indicating that while cultural knowledge exists within LLMs, it may not naturally surface in multilingual interactions if cultural context is not explicitly provided. Despite the explicit prompting benefit, however, the answers reduce in diversity and tend toward stereotypes. Second, we identify an explicit cultural customization vector, conserved across all non-English languages we explore, which enables LLMs to be steered from the synthetic English cultural world-model toward each non-English cultural world. Steered responses retain the diversity of implicit prompting and reduce stereotypes to dramatically improve the potential for customization. We discuss the implications of explicit cultural customization for understanding the conservation of alternative cultural world models within LLMs, and their controllable utility for translation, cultural customization, and the possibility of making the explicit implicit through soft control for expanded LLM function and appeal.

Summary

  • The paper identifies a significant explicit–implicit localization gap, with performance differences often exceeding 10% in culturally nuanced tasks.
  • The paper introduces activation patching and steering vectors to pinpoint and guide cultural context within specific model layers.
  • The paper demonstrates that a universal cultural vector can enhance cross-lingual transferability and improve culturally appropriate responses.

Localized Cultural Knowledge in LLMs

The paper "Localized Cultural Knowledge is Conserved and Controllable in Large Language Models" (2504.10191) investigates how cultural nuances are retained and activated in multilingual LLMs. It distinguishes between explicit and implicit localization and explores mechanisms for bridging the explicit–implicit localization gap observed when LLMs are prompted without specific cultural context.

Explicit vs. Implicit Localization

LLMs, like bilingual speakers, tend to default to responses rooted in English-centric paradigms even when operating in other languages. This default behavior reflects an implicit bias stemming from training data dominated by English. The paper introduces the explicit–implicit localization gap: the performance disparity between models given explicit cultural context in the prompt and models tackling the same culturally nuanced tasks without it. Providing cultural context in prompts improves localized responses, but at the cost of more homogeneous and stereotyped outputs, undermining the diversity desired in model responses (Figure 1).

Figure 1: The explicit–implicit localization gap revealed through varied chat interaction settings.
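The explicit and implicit settings can be sketched as matched prompt pairs for the same query. The helper and template strings below are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical sketch: building matched implicit/explicit prompt pairs.
# The template wording is an assumption; the paper's exact prompts differ.

def build_prompts(query: str, culture: str) -> dict:
    """Return an implicit prompt (query only) and an explicit prompt
    (the same query with a cultural-context prefix)."""
    return {
        "implicit": query,
        "explicit": f"Answer as someone from {culture}. {query}",
    }

pair = build_prompts("Suggest a popular breakfast dish.", "Turkey")
print(pair["explicit"])
```

Evaluating a model on both members of each pair, and differencing the scores, yields the gap discussed below.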

Evaluation Methodology

The researchers' experimental framework benchmarks models across five languages: English, Turkish, Russian, French, and Bengali. The datasets encompass cultural identifiers such as names, cities, tasks, and culturally grounded queries. Performance is measured under both explicit and implicit setups, quantifying the localization gap (Figure 2).

Figure 2: Heatmap showing the explicit–implicit localization gap across models and languages.

Key findings include:

  • A significant localization gap, often exceeding a 10% performance difference, that is more pronounced in smaller models.
  • Improved performance when a cultural reference prefix precedes the input query, indicating that even a single culturally relevant token can steer the model's responses.
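As a minimal sketch, the localization gap for one model and language is just the difference between its accuracy with and without explicit cultural context. The scores below are hypothetical numbers chosen to illustrate a >10% gap, not results from the paper:

```python
# Minimal sketch of the explicit–implicit localization gap, assuming
# per-setting accuracy scores (the paper's evaluation is richer).

def localization_gap(explicit_acc: float, implicit_acc: float) -> float:
    """Gap = accuracy with explicit cultural context minus accuracy without it."""
    return explicit_acc - implicit_acc

# Hypothetical accuracies illustrating a gap above 10%.
gap = localization_gap(0.72, 0.58)
print(f"{gap:.2f}")  # prints 0.14
```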

Mechanistic Interpretability

To locate where cultural nuances are encoded within models, the paper uses activation patching. This technique identifies the layers at which cultural context begins to influence output probabilities. Notably, layers 23 and 30 are crucial: the model consolidates a world model there that becomes culturally targeted (Figure 3).


Figure 3: Activation patching results identifying where localized token probabilities peak.
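The core move in activation patching is to cache an activation from a run with cultural context and splice it into a run without, then measure how the output shifts. The toy "model" below is just a stack of random tanh layers over a small vector, a stand-in for hooking real transformer layers; all names and shapes are illustrative:

```python
import numpy as np

# Toy sketch of activation patching. Real experiments hook transformer
# layers; here the "model" is a stack of random tanh layer functions.

rng = np.random.default_rng(0)
layers = [lambda h, W=rng.normal(size=(4, 4)): np.tanh(W @ h)
          for _ in range(5)]

def run(h, cache=None, patch_layer=None, patch_value=None):
    """Run all layers; optionally record activations, or overwrite one
    layer's output with a cached activation from another run."""
    for i, layer in enumerate(layers):
        h = layer(h)
        if patch_layer == i and patch_value is not None:
            h = patch_value          # splice in the cached activation
        if cache is not None:
            cache[i] = h.copy()
    return h

explicit_h = rng.normal(size=4)      # run with cultural context
implicit_h = rng.normal(size=4)      # run without cultural context
cache = {}
run(explicit_h, cache=cache)                       # cache explicit-run activations
patched = run(implicit_h, patch_layer=2, patch_value=cache[2])
baseline = run(implicit_h)
print(np.linalg.norm(patched - baseline))          # size of the patch's effect
```

Sweeping `patch_layer` over all layers and plotting the effect size is what localizes the influential layers (23 and 30 in the paper's models).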

Steering Localization with Activation Vectors

The research introduces steering vectors to close the explicit–implicit localization gap without explicit prompting. Vectors computed to align model outputs with specific cultural contexts guide the model toward more culturally appropriate answers. These vectors transfer across languages and generalize across tasks (Figure 4).

Figure 4: Steering results across several language layers showing improved localization performance.
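A common way to build such a vector, sketched here under the assumption of a mean-difference construction (the paper's exact recipe may differ), is to subtract the average hidden state of implicit prompts from that of explicit prompts at one layer, then add the scaled vector at inference time:

```python
import numpy as np

# Sketch: steering vector as the mean activation difference between
# explicit-context and implicit prompts at one layer. Shapes and data
# are illustrative stand-ins for real model activations.

rng = np.random.default_rng(1)
explicit_acts = rng.normal(loc=1.0, size=(100, 8))   # activations with context
implicit_acts = rng.normal(loc=0.0, size=(100, 8))   # activations without

steer = explicit_acts.mean(axis=0) - implicit_acts.mean(axis=0)

def apply_steering(hidden, vector, alpha=1.0):
    """Add the scaled steering vector to a hidden state at inference."""
    return hidden + alpha * vector

h = rng.normal(size=8)
h_steered = apply_steering(h, steer, alpha=2.0)
print(np.round(h_steered - h, 3))    # the applied shift, 2.0 * steer
```

The scale `alpha` trades off steering strength against fluency, which is how steered responses can retain the diversity of implicit prompting.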

Universal Cultural Vector

An intriguing component of the investigation is steering with a single universal cultural vector, which adapts outputs to match the prompt language, suggesting that even complex cultural nuances can be generalized across languages within LLMs (Figure 5).

Figure 5: Application of universal steering vectors reveals cross-cultural adaptability improvements.
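One simple way to obtain a language-agnostic vector, sketched here as a hypothetical construction (the paper may derive its universal vector differently), is to average the per-language steering vectors:

```python
import numpy as np

# Hypothetical sketch: a "universal" cultural vector as the average of
# per-language steering vectors. Vectors here are random placeholders.

rng = np.random.default_rng(2)
per_lang = {lang: rng.normal(size=8)
            for lang in ["tr", "ru", "fr", "bn"]}   # Turkish, Russian, French, Bengali

universal = np.mean(list(per_lang.values()), axis=0)
print(universal.shape)
```

Because each per-language vector points away from the English-centric world model toward its own culture, their shared component is a plausible candidate for a single conserved customization direction.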

Implications and Future Work

The results have significant implications for deploying LLMs globally: while implicit training on language and cultural context provides baseline performance, explicit contextualization remains necessary for capturing local cultural depth. Future work may explore parameter-efficient finetuning or enhanced universal steering setups to further refine models' cultural specificity.

Conclusion

This investigation sheds light on pivotal components of cultural localization, advocating for a balanced, informed approach to leveraging both explicit and implicit mechanisms for context within multilingual applications. It serves as both a practical method for improving model deployment across cultures and a conceptual framework for understanding how LLMs navigate the rich landscape of global cultural knowledge.
