
Detecting LGBTQ+ Instances of Cyberbullying (2409.12263v1)

Published 18 Sep 2024 in cs.LG and cs.SI

Abstract: Social media continues to have an impact on the trajectory of humanity. However, its introduction has also weaponized keyboards, allowing the abusive language normally reserved for in-person bullying to jump onto the screen, i.e., cyberbullying. Cyberbullying poses a significant threat to adolescents globally, affecting the mental health and well-being of many. A group that is particularly at risk is the LGBTQ+ community, as researchers have uncovered a strong correlation between identifying as LGBTQ+ and suffering from greater online harassment. Therefore, it is critical to develop machine learning models that can accurately discern cyberbullying incidents as they happen to LGBTQ+ members. The aim of this study is to compare the efficacy of several transformer models in identifying cyberbullying targeting LGBTQ+ individuals. We seek to determine the relative merits and demerits of these existing methods in addressing complex and subtle kinds of cyberbullying by assessing their effectiveness with real social media data.

Authors (5)
  1. Muhammad Arslan (9 papers)
  2. Manuel Sandoval Madrigal (1 paper)
  3. Mohammed Abuhamad (14 papers)
  4. Deborah L. Hall (4 papers)
  5. Yasin N. Silva (6 papers)

Summary

Detecting LGBTQ+ Instances of Cyberbullying

The paper "Detecting LGBTQ+ Instances of Cyberbullying" by Muhammad Arslan et al. makes significant contributions to the domain of cyberbullying detection with a focus on LGBTQ+ individuals. This work is situated in the broader context of developing machine learning models that can identify abusive language and harassment on social media platforms. Because the LGBTQ+ community is disproportionately targeted by online harassment, the paper examines the effectiveness of advanced transformer models (specifically RoBERTa, BERT, and GPT-2) in identifying cyberbullying instances pertinent to this community.

Introduction

Cyberbullying represents a pivotal issue for adolescents globally, exacerbating risks such as mental health challenges and even suicidality. For the LGBTQ+ community, these risks are magnified due to chronic stressors and systemic disparities. Hence, there is a compelling need to develop precise detection models that can handle the unique characteristics of bullying directed at LGBTQ+ individuals. While general cyberbullying detection has been extensively studied, models sensitive to the nuanced and context-specific nature of LGBTQ+ harassment remain underdeveloped.

The paper works to close this gap by evaluating the efficacy of several transformer-based language models on an Instagram dataset curated for this purpose. The primary objective is to determine how well these models can discern LGBTQ+ cyberbullying, given the complex and often subtle forms such harassment can take.

Methodology

The researchers framed the problem as a binary classification task in which each comment in the Instagram dataset is labeled as either LGBTQ+-related cyberbullying or non-cyberbullying. This involves training a classifier f to map any given comment p to the appropriate label y.

The dataset comprises 1,083 annotated Instagram comments, 217 of which target LGBTQ+ individuals. Preprocessing steps included handling missing values, and stratified k-fold cross-validation was used to ensure robust evaluation. The authors employed three pre-trained models, RoBERTa, BERT, and GPT-2, and analyzed their performance across different configurations, including the use of oversampling techniques such as SMOTE and ADASYN to address the class imbalance.
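The stratified splitting step can be sketched in a few lines of plain Python. This is a toy rendition for illustration only: the authors presumably use a library implementation, and the fold count here is an assumption. The point is that each fold preserves the roughly 1:4 class ratio of the 1,083-comment dataset.

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split example indices into k folds while preserving label proportions.

    A minimal sketch of stratified k-fold assignment, not the paper's pipeline.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, y in enumerate(labels):
        by_label[y].append(idx)
    folds = [[] for _ in range(k)]
    for y, idxs in by_label.items():
        rng.shuffle(idxs)
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)  # deal each class round-robin across folds
    return folds

# Toy labels mirroring the paper's imbalance:
# 1,083 comments, 217 labeled as LGBTQ+ cyberbullying (positive class).
labels = [1] * 217 + [0] * 866
folds = stratified_kfold(labels, k=5)
for f in folds:
    # each fold carries 43-44 positives, i.e. ~20% of its comments
    print(len(f), sum(labels[i] for i in f))
```

Round-robin dealing per class is the simplest way to keep per-fold class counts within one example of each other, which matters when the minority class has only a few hundred instances.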

Experimental Results

The evaluation metrics considered in the paper include accuracy, precision, recall, F1 score, and Area Under the Receiver Operating Characteristic curve (AUROC). These metrics enable a detailed assessment of each model's strengths and weaknesses. A results table in the paper summarizes the models' performance across the different configurations.
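The threshold metrics above follow from the confusion-matrix counts in the standard way. The sketch below re-derives them from raw label lists (with 1 denoting the LGBTQ+ cyberbullying class); it is a generic illustration of the definitions, not the authors' evaluation code, and AUROC is omitted since it requires ranked scores rather than hard labels.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary task (positive = 1).

    A plain restatement of the standard definitions used in the paper's
    evaluation, not the authors' actual code.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: 3 positives, 8 comments total.
metrics = binary_metrics([1, 1, 1, 0, 0, 0, 0, 0],
                         [1, 1, 0, 0, 0, 0, 1, 0])
```

On an imbalanced dataset like this one, accuracy alone is misleading (predicting "non-cyberbullying" for everything already scores about 0.80), which is why the paper's F1 and recall figures are the more informative numbers.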

  • RoBERTa emerged as the top performer with an accuracy of 0.9456 and an F1 score of 0.733 in its best configuration. It showed robustness across various metrics, indicating its superior capability in discerning cyberbullying comments.
  • BERT and GPT-2 displayed lower performance. Particularly, BERT struggled with both precision and recall, highlighting its challenges in capturing the specific nature of LGBTQ+ bullying.
  • The impact of oversampling techniques (SMOTE and ADASYN) was evident, as they generally led to improvements in recall for LGBTQ+ bullying comments, though the issue of false negatives persisted.
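The interpolation idea behind the SMOTE-style oversampling mentioned above can be sketched on toy one-dimensional features. This is purely illustrative: the study presumably applies library implementations of SMOTE/ADASYN to learned text representations, and the neighbor count and feature values here are assumptions.

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Create synthetic minority examples by interpolating each sampled
    point toward one of its k nearest minority neighbors (1-D features).

    A toy sketch of the SMOTE idea, not the library algorithm.
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbors of x among the other minority points
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: abs(p - x))[:k]
        nb = rng.choice(neighbors)
        lam = rng.random()
        synthetic.append(x + lam * (nb - x))  # a point on the segment x -> nb
    return synthetic

# Four minority-class feature values; generate four synthetic ones.
minority = [0.9, 1.0, 1.1, 1.3]
new_points = smote_like(minority, n_new=4)
```

Because synthetic points lie between existing minority examples rather than duplicating them, the classifier sees a denser but still plausible positive region, which is consistent with the recall gains the paper reports; ADASYN differs mainly in sampling more heavily near hard-to-learn examples.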

Discussion

Despite the promising results, several challenges remain. Misclassification of LGBTQ+ cyberbullying instances often stemmed from the models' inability to capture context-dependent and implicit abusive language. False negatives and false positives were strongly influenced by the nuanced nature of such comments, underscoring the need for more sophisticated context-aware models.

To further improve detection capabilities, future research could explore several avenues:

  • Integrating multi-modal data: Leveraging images, videos, and network metrics like likes and shares could enrich context and improve model accuracy.
  • Developing richer datasets: Expanding the dataset to include more diverse scenarios and types of bullying could aid in training more robust models.
  • Advanced contextual understanding: Techniques that better capture the sequential and temporal aspects of conversations might enhance detection of subtle bullying cues.
  • Bias mitigation: Ensuring fairness in model training to reduce biases against particular groups within the dataset remains a critical area.

Conclusion

This paper contributes to the ongoing effort to create more inclusive and fair cyberbullying detection tools. It demonstrates that while transformer models like RoBERTa outperform others on this task, there are inherent challenges in detecting nuanced and context-specific harassment. This calls for further research into model improvements and better dataset curation. Addressing these aspects will be essential for developing robust systems capable of fostering safer online environments, particularly for vulnerable groups like the LGBTQ+ community.

The paper’s integration of transformer models within this specialized domain underscores the importance of targeted machine learning applications to address specific social issues. By enhancing the capabilities of these models and expanding the datasets, future efforts can significantly improve the digital experience for marginalized communities.
