
Human-Machine Collaboration Approaches to Build a Dialogue Dataset for Hate Speech Countering (2211.03433v1)

Published 7 Nov 2022 in cs.CL and cs.CY

Abstract: Fighting online hate speech is a challenge that is usually addressed using Natural Language Processing via automatic detection and removal of hate content. Besides this approach, counter narratives have emerged as an effective tool employed by NGOs to respond to online hate on social media platforms. For this reason, Natural Language Generation is currently being studied as a way to automatize counter narrative writing. However, the existing resources necessary to train NLG models are limited to 2-turn interactions (a hate speech and a counter narrative as response), while in real life, interactions can consist of multiple turns. In this paper, we present a hybrid approach for dialogical data collection, which combines the intervention of human expert annotators over machine generated dialogues obtained using 19 different configurations. The result of this work is DIALOCONAN, the first dataset comprising over 3000 fictitious multi-turn dialogues between a hater and an NGO operator, covering 6 targets of hate.

Analysis of "Human-Machine Collaboration Approaches to Build a Dialogue Dataset for Hate Speech Countering"

The paper "Human-Machine Collaboration Approaches to Build a Dialogue Dataset for Hate Speech Countering" addresses the significant challenge of countering online hate speech using NLP techniques. While traditional approaches focus predominantly on detection and removal of hateful content, the paper explores the use of Natural Language Generation (NLG) for creating counter-narratives (CNs), which are strategic responses aimed at combating hate speech with constructive dialogues.

The authors present a novel dataset named DIALOCONAN, which comprises over 3,000 multi-turn dialogues between fictional 'haters' and Non-Governmental Organization (NGO) operators. These interactions span six hate targets, including vulnerable groups such as LGBT+ people, migrants, and religious communities, reflecting realistic online conflicts. This represents a departure from existing resources, which consist predominantly of simple two-turn exchanges, and fills a crucial gap for training more robust dialogue systems.

Methodology and Experimentation

The authors implement a hybrid methodology combining both human expertise and machine algorithms to curate the dataset. The process comprises three core sessions focusing on dialogue structure and wording, leveraging diverse techniques ranging from simple concatenation of existing HS/CN pairs to paraphrasing and complete dialogue generation using pre-trained LLMs.

  1. Session 1: Structurally Novel Dialogues - The researchers explore multiple strategies to concatenate existing hate speech and counter-narrative pairs from prior datasets into coherent multi-turn dialogues. Various connection strategies, including random pairing, similarity measures (Jaccard and cosine similarity), and keyword matching, are employed to maximize semantic coherence either globally (across the whole dialogue) or locally (between consecutive HS and CN turns).
  2. Session 2: Wording Novelty through Paraphrasing - Here, the focus shifts to enhancing lexical diversity. Paraphrasing models such as Protaugment and style-based paraphrasing frameworks are tasked with rephrasing counter-narratives while maintaining their original meaning. The value of this session lies in producing diverse linguistic expressions without altering the underlying dialogue structure.
  3. Session 3: Autonomous Generation of Dialogues - Utilizing dialogue-generation models like DialoGPT and T5, the researchers create dialogues from scratch. The outputs are then rigorously edited by human annotators. This session highlights the balance between creativity facilitated by LLMs and the necessity of human oversight to curtail biases and inaccuracies inherent to machine-generated content.
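
The similarity-based pairing in Session 1 can be illustrated with a minimal sketch. This is not the authors' code: it uses token-level Jaccard similarity (one of the measures named above) and hypothetical example turns to pick the hate-speech turn most lexically coherent with the preceding counter-narrative, i.e. the local coherence strategy.

```python
import re

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two utterances."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def best_next_turn(previous_cn: str, candidate_hs: list[str]) -> str:
    """Pick the candidate hate-speech turn that is most lexically
    coherent with the preceding counter-narrative (local strategy)."""
    return max(candidate_hs, key=lambda hs: jaccard(previous_cn, hs))

# Hypothetical turns, for illustration only.
cn = "Migrants contribute to the economy and enrich our culture."
candidates = [
    "They take our jobs and ruin the economy.",
    "The weather is bad today.",
]
print(best_next_turn(cn, candidates))  # the on-topic candidate wins
```

Swapping `jaccard` for cosine similarity over sentence embeddings, or for keyword overlap, reproduces the other pairing strategies the authors compare.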

Evaluation and Implications

The evaluation metrics encompass post-editing effort (HTER), repetition rate, turn swaps, and novelty measures. Results indicate the superior efficiency of using existing pairs for dialogue structuring, while the generation session afforded the highest level of lexical novelty. Moreover, the paper underscores the symbiotic potential of human-machine collaboration in generating high-quality datasets that simultaneously enhance efficiency and maintain the qualitative richness necessary for machine training.
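A rough sketch of the post-editing metric may help. HTER (human-targeted translation edit rate) counts the word-level edits needed to turn a machine-generated turn into its human post-edit, normalized by the length of the post-edit. The sketch below covers only insertions, deletions, and substitutions (full TER also counts block shifts) and is not the authors' implementation.

```python
def word_edit_distance(hyp: list[str], ref: list[str]) -> int:
    """Word-level Levenshtein distance (insert/delete/substitute)."""
    n = len(ref)
    d = list(range(n + 1))          # distance row for the empty hypothesis
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i        # prev holds d[i-1][j-1]
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                    # delete h
                       d[j - 1] + 1,                # insert ref[j-1]
                       prev + (h != ref[j - 1]))    # substitute / match
            prev = cur
    return d[n]

def hter(machine_turn: str, post_edited_turn: str) -> float:
    """Edits needed to reach the human post-edit, per post-edit word."""
    hyp, ref = machine_turn.split(), post_edited_turn.split()
    return word_edit_distance(hyp, ref) / max(len(ref), 1)

print(round(hter("hello wrld friend", "hello world friend"), 3))  # → 0.333
```

Repetition rate and turn swaps are separate measures reported in the paper; HTER alone quantifies how much annotators had to rewrite each machine-produced turn.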

The implications of this research are twofold. On a practical level, the dataset provides a robust resource for training more sophisticated NLG models capable of engaging in meaningful anti-hate speech interactions. Theoretically, it offers insights into designing hybrid systems where human expertise is leveraged to guide machine learning processes, ensuring generated content is aligned with societal values and factual reality.

Future Directions

Future research could explore multilingual extensions, given that most datasets focus predominantly on English. Another avenue could be the integration of real-time counter-narrative deployment systems based on the generated dialogue structures, facilitating immediate interventions in real-world scenarios. Additionally, as models improve in handling nuanced conversational contexts, there is potential to explore real-world deployment for NGOs actively engaged in online advocacy.

The paper presents a comprehensive framework for hybrid data-collection strategies tailored to counter-narrative dialogue, solidifying its position as a valuable tool for both the research community and organizations advocating for digital civility.

Authors (4)
  1. Helena Bonaldi (6 papers)
  2. Sara Dellantonio (1 paper)
  3. Serra Sinem Tekiroglu (10 papers)
  4. Marco Guerini (40 papers)
Citations (34)