Verify whether admins themselves propagate problematic content or moderate unevenly

Ascertain whether WhatsApp group admins share problematic content themselves, whether they perceive hate speech or propaganda as harmless because of personal biases, whether they treat offenders equitably, and whether they use their power to suppress minority or opposing viewpoints. These behaviors could not be verified through the study's self-reported interviews.

Background

Because the study relies on self-reported qualitative interviews, the authors acknowledge uncertainty about admins’ own behaviors and possible biases. This includes whether admins themselves circulate problematic content, whether they interpret harmful content as acceptable due to bias, and whether they exercise moderation power fairly or in ways that suppress minority or dissenting views. Establishing these facts is important for understanding fairness, accountability, and bias in end-to-end encrypted group moderation.

References

Given the self-reported nature of qualitative interviews, we could not verify if admins in our study shared problematic content themselves, perceived hate speech or propaganda as harmless due to their own biases, treated the offenders equally, or used their power to suppress minority or opposing views.

One Style Does Not Regulate All: Moderation Practices in Public and Private WhatsApp Groups (2401.08091 - Shahid et al., 2024), Section "Limitations and Conclusion"