Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms (1801.06482v1)

Published 19 Jan 2018 in cs.IR, cs.CL, and cs.SI

Abstract: Harassment by cyberbullies is a significant phenomenon on social media. Existing works for cyberbullying detection have at least one of the following three bottlenecks. First, they target only one particular social media platform (SMP). Second, they address just one topic of cyberbullying. Third, they rely on carefully handcrafted features of the data. We show that deep learning based models can overcome all three bottlenecks. Knowledge learned by these models on one dataset can be transferred to other datasets. We performed extensive experiments using three real-world datasets: Formspring (12k posts), Twitter (16k posts), and Wikipedia (100k posts). Our experiments provide several useful insights about cyberbullying detection. To the best of our knowledge, this is the first work that systematically analyzes cyberbullying detection on various topics across multiple SMPs using deep learning based models and transfer learning.

Citations (295)

Summary

  • The paper demonstrates that deep learning models outperform traditional methods in cyberbullying detection by leveraging transfer learning.
  • The study employs CNN, LSTM, and attention-based BLSTM architectures to capture semantic nuances without relying on handcrafted features.
  • Transfer learning across diverse datasets enhances detection accuracy despite challenges like class imbalance and platform-specific language variation.

Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms

The paper "Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms" by Sweta Agrawal and Amit Awekar addresses significant challenges in the detection of cyberbullying in social media environments using deep learning techniques. Traditional approaches in this domain have been limited by their reliance on a single social media platform, focus on specific types of cyberbullying (e.g., racism or sexism), and dependence on handcrafted features such as swear word lists. This research introduces deep learning models that significantly mitigate these limitations, leveraging the capabilities of transfer learning to enhance performance across diverse datasets.

Methodology and Datasets

The authors utilized deep learning methodologies, specifically Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, Bidirectional Long Short-Term Memory (BLSTM) networks, and BLSTM networks with attention mechanisms. These models are known for their ability to learn complex feature representations without explicit feature engineering, making them particularly well-suited for tasks like cyberbullying detection, where contextual and semantic nuances are critical.
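The attention mechanism layered on top of the BLSTM can be illustrated with a small sketch: score each time step's hidden state, normalize the scores with a softmax, and pool the states into a single post representation. This is a minimal NumPy illustration of attention-weighted pooling, not the authors' exact implementation; the parameter names (`w`, `b`, `u`) and dimensions are assumptions for the example.

```python
import numpy as np

def attention_pool(hidden_states, w, b, u):
    """Attention-weighted pooling over a sequence of hidden states.

    hidden_states: (T, d) array, e.g. BLSTM outputs over T tokens.
    w, b, u: attention parameters (learned in a real model; supplied here).
    Returns the (d,) post vector and the (T,) attention weights.
    """
    # Score each time step: one tanh layer followed by a dot product.
    scores = np.tanh(hidden_states @ w + b) @ u          # shape (T,)
    # Softmax turns the scores into a distribution over time steps.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The post representation is the weighted sum of hidden states.
    return weights @ hidden_states, weights

# Toy example: 5 tokens, 8-dimensional hidden states.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
w = rng.normal(size=(8, 8))
b = np.zeros(8)
u = rng.normal(size=8)
post_vec, alpha = attention_pool(H, w, b, u)
```

The attention weights `alpha` indicate which tokens the model considers most indicative of bullying, which also makes such models somewhat interpretable.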

The research leverages transfer learning, illustrating that models trained on one dataset can effectively adapt to others, thus confirming that a learned understanding of cyberbullying is transferable across social media platforms. The authors experimented using datasets from Formspring, Twitter, and Wikipedia, covering over 128,000 annotated posts to evaluate cyberbullying detection on topics such as personal attacks, racism, and sexism.
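One concrete form this transfer takes is reusing word embeddings learned on a source platform to initialize a model for a target platform. The sketch below is a hedged illustration of that idea, not the paper's code: `build_embedding_matrix`, the toy vocabulary, and the vector dimensions are all invented for the example.

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, rng):
    """Initialize an embedding matrix for a target dataset's vocabulary.

    Words already seen on the source dataset keep their learned vectors;
    unseen words get a small random initialization.
    """
    mat = rng.normal(scale=0.1, size=(len(vocab), dim))
    transferred = 0
    for i, word in enumerate(vocab):
        if word in pretrained:
            mat[i] = pretrained[word]   # transfer the learned vector
            transferred += 1
    return mat, transferred

rng = np.random.default_rng(0)
# Hypothetical embeddings learned while training on a source platform.
source_vecs = {"idiot": np.ones(4), "loser": np.full(4, 2.0)}
target_vocab = ["idiot", "hello", "loser", "world"]
emb, n_transferred = build_embedding_matrix(target_vocab, source_vecs, 4, rng)
```

The target-platform model then starts from embeddings that already encode cyberbullying-specific semantics, which is the mechanism the authors credit for the cross-dataset gains.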

Key Findings and Insights

  1. Performance of Deep Learning Models: The paper concludes that deep learning models surpass traditional machine learning methods in detecting cyberbullying. Eliminating the reliance on handcrafted features is a clear advancement, especially given how much cyberbullying varies across platforms.
  2. Role of Transfer Learning: The introduction of transfer learning demonstrated a substantial increase in detection capability, notably through the transfer of learned word embeddings, which capture cyberbullying-specific semantics across datasets. Transfer learning facilitated higher precision and recall compared to models trained in isolation on a single dataset.
  3. Class Imbalance: The datasets exhibited a significant class imbalance, where non-cyberbullying instances vastly outnumbered bullying instances. Techniques such as oversampling were employed to counteract this imbalance, which subsequently improved model outcomes.
  4. Platform-specific Variability: The work highlights the variation in cyberbullying manifestations across different social media platforms. For instance, words like "fat" and "slave" showed platform-specific semantic associations that underline the contextual complexity handled by learned embeddings.

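The class-imbalance point above can be made concrete with a small sketch of random oversampling: duplicate minority-class (bullying) posts until the classes are balanced. The paper reports using oversampling of bullying posts; the exact scheme below, and the toy data, are assumptions for illustration.

```python
import random

def oversample(posts, labels, minority=1, rng=None):
    """Duplicate minority-class examples until the classes are balanced.

    A simple random-oversampling sketch; duplicates are drawn uniformly
    at random from the minority-class indices.
    """
    rng = rng or random.Random(0)
    majority_n = sum(1 for y in labels if y != minority)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    out_posts, out_labels = list(posts), list(labels)
    while sum(1 for y in out_labels if y == minority) < majority_n:
        i = rng.choice(minority_idx)
        out_posts.append(posts[i])
        out_labels.append(labels[i])
    return out_posts, out_labels

# Toy imbalanced dataset: 3 benign posts, 2 bullying posts.
posts = ["nice day", "you are a loser", "great post", "lovely", "awful person"]
labels = [0, 1, 0, 0, 1]
bal_posts, bal_labels = oversample(posts, labels)
```

Oversampling is applied only to the training split; evaluating on an oversampled test set would inflate the reported metrics.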
Implications for Future Research

The demonstrated efficacy of deep learning in this context suggests several directions for future research and practical application:

  • Integration with Additional Data: Enriching these models with supplementary data like user profiles and interaction graphs could enhance the context for more nuanced detection. This is particularly relevant for identifying different forms and severity levels of cyberbullying.
  • Real-time Applications: Considering the promising results, developing systems for real-time monitoring and intervention in social media could become feasible, potentially informing policy decisions related to cyber safety.
  • Cross-Domain Adaptability: Extending similar model architectures and methodologies to parallel domains (e.g., misinformation or hate speech detection) could test the adaptability and generalizability of such approaches.
  • Exploration of New Architectures: As the field evolves, exploring novel neural architectures or enhancement mechanisms like transformers may further improve model performance and reduce computational costs.

In conclusion, the paper decisively highlights deep learning and transfer learning as potent tools in the evolving domain of cyberbullying detection. It lays a foundational methodology for cross-platform adaptability, potentially transforming how social media platforms address harmful interactions going forward.