- The paper introduces a novel computational framework that quantifies othering language using large language models on war blogs.
- It employs an Artificial Annotator Alignment process to ensure reliable data annotation and captures dynamic language shifts during key conflict events.
- The study finds significant correlations between moral framing and social attention, offering actionable insights for moderating harmful online narratives.
Analysis of Othering Language in Online Discourses Amidst the Russia-Ukraine Conflict
The paper explores the concept of "othering" in online discourses, particularly by examining the language used by war bloggers on Telegram during the Russia-Ukraine conflict. The research introduces a novel computational framework to understand and quantify "othering", a socio-political construct wherein outgroups are depicted as fundamentally different and often as threats to the ingroup. Unlike traditional studies focusing solely on hate speech or fear speech, this work examines the more nuanced mechanism of othering as it manifests on digital communication platforms.
Methodology
The authors propose a novel taxonomy of othering language, which includes four primary categories: Threats to Culture or Identity, Threats to Survival or Physical Security, Vilification/Villainization, and Explicit Dehumanization. Leveraging LLMs, the paper develops classifiers that can rapidly adapt to different contextual domains, allowing for detailed linguistic analysis of othering across various platforms and contexts.
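The taxonomy above lends itself to prompt-based classification. As a minimal sketch of that idea, the snippet below composes a zero-shot labeling prompt over the four categories; the prompt wording and function name are illustrative assumptions, not the authors' actual prompts.

```python
# Hypothetical sketch of a zero-shot classification prompt built over the
# paper's four-category othering taxonomy. Only the category names come from
# the summary; the prompt text and helper name are assumptions.

OTHERING_CATEGORIES = [
    "Threats to Culture or Identity",
    "Threats to Survival or Physical Security",
    "Vilification/Villainization",
    "Explicit Dehumanization",
]

def build_othering_prompt(message: str) -> str:
    """Compose an instruction prompt asking an LLM to label one message."""
    labels = "\n".join(f"- {c}" for c in OTHERING_CATEGORIES)
    return (
        "Classify the following message. Answer with every category that "
        "applies, or 'None' if no othering language is present.\n"
        f"Categories:\n{labels}\n\n"
        f"Message: {message}\n"
        "Labels:"
    )

prompt = build_othering_prompt("Example message to classify.")
```

A prompt like this could be sent to any instruction-tuned LLM; the paper's actual classifiers may use few-shot examples or fine-tuning instead.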
For the empirical analysis, the researchers employ an Artificial Annotator Alignment process. In this method, a high-quality LLM such as GPT-4o annotates the data, and its annotations are first validated against human annotations to establish reliability and consistency; only then is the LLM-annotated data used to train an open-source LLM. The research draws on messages from pro-Russian and pro-Ukrainian Telegram channels as well as the Gab platform, a known haven for far-right discussions in the US, ensuring a broad spectrum of analysis.
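Alignment between the artificial annotator and human annotators is typically quantified with a chance-corrected agreement statistic. As one plausible check (the paper's exact agreement metric is not stated in this summary), here is a small Cohen's kappa implementation comparing human and LLM labels:

```python
# Sketch of an annotator-alignment check: Cohen's kappa between human labels
# and LLM labels on the same messages. Function and variable names are our
# own; the paper may use a different agreement statistic.
from collections import Counter

def cohens_kappa(human: list[str], model: list[str]) -> float:
    """Chance-corrected agreement between two annotators on the same items."""
    n = len(human)
    # Observed agreement: fraction of items both annotators label identically.
    observed = sum(h == m for h, m in zip(human, model)) / n
    # Expected agreement by chance, from each annotator's label distribution.
    ch, cm = Counter(human), Counter(model)
    expected = sum(ch[k] * cm.get(k, 0) for k in ch) / (n * n)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(
    ["othering", "othering", "none", "none"],
    ["othering", "none", "none", "none"],
)
```

In a pipeline like the one described, LLM annotations would only be trusted for training the open-source model once kappa clears a preset threshold.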
Key Findings
The paper finds that the use of othering language is not static; it fluctuates in response to key external events, particularly during high-conflict phases of the Russia-Ukraine war. Among Russian bloggers, the amplification of othering rhetoric coincides with international military or political developments, whereas for Ukrainian bloggers, battlefield successes and failures prominently shape discourse.
Moreover, the analysis underscores a significant relationship between othering language and moral framing. This aligns with the Moralized Threat Hypothesis, which suggests that moral language often justifies or intensifies the use of exclusionary narratives. The paper reveals statistically significant correlations between the deployment of moral language and various forms of othering on digital platforms.
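A correlation of this kind can be tested by comparing, per time window, the share of messages using moral language with the share using othering language. The sketch below computes a Pearson correlation over toy values; the numbers are illustrative only, not the paper's data, and the helper name is an assumption.

```python
# Illustrative sketch: Pearson correlation between the per-day share of
# messages with moral framing and the share with othering language.
# The data points below are toy values, not results from the paper.
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

moral_share = [0.10, 0.15, 0.30, 0.25, 0.40]     # toy daily shares
othering_share = [0.05, 0.12, 0.22, 0.20, 0.33]  # toy daily shares
r = pearson_r(moral_share, othering_share)
```

A statistically significant positive r on real data would be consistent with the Moralized Threat Hypothesis, though the paper's reported test statistics may differ.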
Another critical aspect explored is the attraction of social attention through othering narratives. Messages featuring othering content tend to receive more views, especially during crises, suggesting that such narratives capture public interest and potentially influence public sentiment and responses.
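One simple way to express such an attention gap is the ratio of mean view counts between othering and non-othering messages. The sketch below uses made-up view counts purely for illustration; it does not reproduce the paper's measurements or its statistical tests.

```python
# Illustrative attention-gap sketch: ratio of mean views for messages with
# othering content vs. without. All view counts below are toy numbers.

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

othering_views = [1200, 950, 1800, 2100]  # toy counts, not real data
neutral_views = [400, 700, 650, 500]      # toy counts, not real data

attention_lift = mean(othering_views) / mean(neutral_views)
```

On real data one would pair this descriptive ratio with a significance test (e.g., a Mann-Whitney U test), since view counts are typically heavy-tailed.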
Practical and Theoretical Implications
The proposed framework and methodologies carry significant implications for both theory and practice. On a theoretical level, the paper enriches understanding within computational social science by bridging sociological theories on othering with state-of-the-art machine learning techniques.
Practically, the classification models developed could inform moderation strategies on social media platforms to mitigate the spread of harmful narratives. Deploying these models could help identify high-risk content that promotes division and hostility, offering pathways for interventions aimed at preserving social cohesion.
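In a moderation setting, such a classifier would typically feed a triage step that routes high-scoring content to human review rather than acting automatically. The sketch below is a hypothetical deployment pattern: the threshold value and scoring interface are assumptions, and in practice the threshold would be tuned on a validation set.

```python
# Hypothetical moderation triage sketch: route messages whose model-assigned
# othering score exceeds a threshold to human review. The threshold and the
# (message, score) interface are assumptions, not the paper's design.

REVIEW_THRESHOLD = 0.8  # assumed value; would be tuned on validation data

def triage(scored_messages: list[tuple[str, float]]) -> list[str]:
    """Return the messages flagged for human moderator review."""
    return [msg for msg, score in scored_messages if score >= REVIEW_THRESHOLD]

flagged = triage([("msg-a", 0.95), ("msg-b", 0.30), ("msg-c", 0.85)])
```

Keeping a human in the loop matters here because othering is context-dependent, and false positives on political speech carry real costs.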
Future Directions
Future research could extend the application of this framework across different conflict zones or socio-political contexts to validate its generalizability. Additionally, investigating the nuances of moralized propaganda could offer deeper insights into how information warfare tactics evolve alongside technological advancements.
In summary, this research provides a comprehensive examination of othering in online discourse, highlighting the interplay between language, morality, and social attention, and offers tools for understanding and addressing these dynamics in digital environments.