Conversational Agents to Facilitate Deliberation on Harmful Content in WhatsApp Groups (2405.20254v2)

Published 30 May 2024 in cs.HC and cs.CY

Abstract: WhatsApp groups have become a hotbed for the propagation of harmful content including misinformation, hate speech, polarizing content, and rumors, especially in Global South countries. Given the platform's end-to-end encryption, moderation responsibilities lie on group admins and members, who rarely contest such content. Another approach is fact-checking, which is unscalable, and can only contest factual content (e.g., misinformation) but not subjective content (e.g., hate speech). Drawing on recent literature, we explore deliberation -- open and inclusive discussion -- as an alternative. We investigate the role of a conversational agent in facilitating deliberation on harmful content in WhatsApp groups. We conducted semi-structured interviews with 21 Indian WhatsApp users, employing a design probe to showcase an example agent. Participants expressed the need for anonymity and recommended AI assistance to reduce the effort required in deliberation. They appreciated the agent's neutrality but pointed out the futility of deliberation in echo chamber groups. Our findings highlight design tensions for such an agent, including privacy versus group dynamics and freedom of speech in private spaces. We discuss the efficacy of deliberation using deliberative theory as a lens, compare deliberation with moderation and fact-checking, and provide design recommendations for future such systems. Ultimately, this work advances CSCW by offering insights into designing deliberative systems for combating harmful content in private group chats on social media.

Summary

  • The paper investigates conversational agents that initiate deliberation on harmful content in WhatsApp groups, showcased through a design probe.
  • It employs qualitative methods, including design probes and 21 semi-structured interviews with urban and rural Indian users.
  • Findings highlight key trade-offs in activation methods, anonymity, and group dynamics when combating harmful content such as misinformation.

Conversational Agents to Facilitate Deliberation on Harmful Content in WhatsApp Groups

This paper explores the potential of conversational agents to facilitate deliberation on harmful content in WhatsApp groups. Authored by Dhruv Agarwal, Farhana Shahid, and Aditya Vashistha of Cornell University, it examines the unique challenges and design considerations of deploying AI-driven interventions in end-to-end encrypted social media environments. Through this research, the authors aim to provide insights into combating misinformation, hate speech, and other harmful content in a context where traditional moderation and fact-checking methods are inadequate.

Study Context and Motivation

WhatsApp's extensive reach, especially in the Global South, coupled with its end-to-end encryption, poses significant challenges for content moderation. Unlike open social media platforms, WhatsApp relies on users and group administrators to police harmful content, which is often ineffective. Fact-checking, another common strategy, is labor-intensive, unscalable, and limited primarily to contesting objective misinformation rather than subjective content such as hate speech or polarizing messages. Recognizing these limitations, the authors propose deliberation as an alternative approach, facilitated by conversational agents.

Methodology

The paper employs a qualitative approach using design probes and semi-structured interviews. A design probe was created to illustrate the concept of a conversational agent that intervenes when harmful content is detected, initiating a deliberation process among group members. This was followed by interviews with 21 Indian WhatsApp users from both urban and rural areas to obtain detailed feedback on the proposed solution.

Key Findings

Design Considerations

Participants highlighted various design considerations crucial for the deployment of such agents:

  • Activation Methods: Three activation strategies were proposed: heuristics-based, AI-based, and manual activation. Heuristics-based methods were deemed prone to false positives, while AI-based methods raised privacy concerns. Manual activation, though privacy-preserving, might disrupt group dynamics if the member who triggered the agent could be identified.
  • Deliberation Participation: There were trade-offs between asking all group members for their opinions versus a random subset. Asking everyone may overwhelm the system with responses, while a random subset might exclude minority opinions.
  • Duration: Balancing sufficient time for members to respond against preserving conversational context was a critical concern. Participants suggested a hybrid approach combining time-bound and response-bound strategies (see the sketch after this list).
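To make these trade-offs concrete, the following minimal Python sketch shows how a session might combine the three levers. Every name, threshold, and keyword in it is an illustrative assumption for exposition, not the paper's design probe.

```python
import random
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeliberationSession:
    """Illustrative session manager for the three design levers above.
    All names, thresholds, and heuristics are assumptions, not the
    paper's implementation."""
    members: list[str]
    sample_size: Optional[int] = None   # None = invite everyone
    max_duration_s: float = 24 * 3600   # time-bound cutoff (assumed: 24 h)
    min_responses: int = 5              # response-bound quorum (assumed)
    responses: dict[str, str] = field(default_factory=dict)
    started_at: float = field(default_factory=time.time)

    def invitees(self) -> list[str]:
        # Participation trade-off: ask everyone vs. a random subset.
        if self.sample_size is None or self.sample_size >= len(self.members):
            return list(self.members)
        return random.sample(self.members, self.sample_size)

    def record_response(self, member: str, opinion: str) -> None:
        self.responses[member] = opinion

    def is_closed(self) -> bool:
        # Hybrid duration rule: close on whichever comes first,
        # the time limit or the response quorum.
        timed_out = time.time() - self.started_at >= self.max_duration_s
        return timed_out or len(self.responses) >= self.min_responses

def should_activate(message: str, flagged_manually: bool) -> bool:
    # Toy activation policy: a manual flag, or a keyword heuristic standing
    # in for the heuristic/AI detectors participants debated (assumed).
    triggers = ("forward this to everyone", "100% cure")
    return flagged_manually or any(t in message.lower() for t in triggers)
```

For a 40-member group, for instance, the agent might invite a random subset of 10 and close the session after 24 hours or 5 responses, whichever comes first; inviting everyone instead preserves minority voices at the cost of flooding the chat.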

Process and Anonymity

Participants preferred anonymous deliberation to avoid conflicts and disruptions in group dynamics. Anonymity allowed for freer expression, especially in hierarchical social contexts common in India. However, there were concerns about the misuse of anonymity leading to disrespectful comments or personal attacks.
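One plausible mechanism for this anonymity, sketched below under assumed names, is for the agent to collect opinions in private chats and repost them to the group under per-session pseudonyms. This is an assumption about how such a system could work, not the paper's design.

```python
import secrets

class AnonymousRelay:
    """Illustrative pseudonym relay: the agent collects each opinion in a
    private chat and reposts it to the group under a per-session alias.
    Names and behavior are assumptions, not the paper's design."""

    def __init__(self) -> None:
        self._alias: dict[str, str] = {}  # member id -> session pseudonym

    def _pseudonym(self, member_id: str) -> str:
        # One random alias per member per session, so follow-up replies to
        # the same alias stay coherent without revealing identity.
        if member_id not in self._alias:
            self._alias[member_id] = f"Participant-{secrets.token_hex(2)}"
        return self._alias[member_id]

    def relay(self, member_id: str, opinion: str) -> str:
        # The group sees only the alias, never the member id.
        return f"{self._pseudonym(member_id)}: {opinion}"
```

Discarding the alias table when a session closes would limit long-term identifiability, and the relay is also a natural place to screen messages for the personal attacks participants worried anonymity might invite.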

Perceived Strengths

Participants identified several strengths in the proposed system:

  • Anonymity: Encouraged participation and honest feedback without fear of reprisal.
  • Neutrality: The agent acted as a neutral facilitator, which was appreciated for its non-confrontational approach.
  • Accountability: The process nudged users to think critically before sharing potentially harmful content.
  • Diverse Opinions: Surfacing a variety of perspectives was seen as beneficial in combating echo chambers.

Potential Pitfalls

Despite its strengths, the participants noted several pitfalls:

  • Group Dynamics: Risk of disrupting social harmony, especially in smaller or close-knit groups.
  • Opinions vs. Facts: Participants preferred flagging over deliberation, citing concerns that deliberation primarily surfaces opinions rather than verifiable facts.
  • Echo Chambers: The effectiveness of deliberation in entrenched echo chambers was questioned.

Discussion and Implications

The paper's findings underscore the intricate balance required in designing a deliberation system for WhatsApp. Ensuring anonymity, minimizing user workload, and maintaining group harmony emerged as key design challenges. The authors argue for a procedural approach to deliberation, focusing on the deliberation process itself rather than its outcomes. This aligns with modern deliberative theory, which emphasizes rational discourse and critical reflection over consensus.

Comparison with Moderation and Fact-Checking

Deliberation, as proposed in this paper, complements existing content moderation and fact-checking strategies. While it may not replace fact-checking, it can help identify content that merits professional scrutiny, especially hyperlocal or multilingual misinformation. Moreover, community-driven deliberation can extend the reach and impact of fact-checks, leveraging local knowledge and fostering collective accountability.

Conclusion

This paper advances the field of CSCW by shedding light on the practical and theoretical implications of using conversational agents to facilitate deliberation on harmful content in private group chats. It provides a nuanced understanding of the challenges and opportunities in this space, offering concrete design recommendations and paving the way for future research in AI-assisted deliberative systems. The integration of such agents holds promise not only for improving content quality on platforms like WhatsApp but also for fostering a more informed and reflective digital public sphere.
