Fairness across Network Positions in Cyberbullying Detection Algorithms (1905.03403v1)
Abstract: Cyberbullying, which often has a deeply negative impact on the victim, has grown into a serious issue in Online Social Networks. Recently, researchers have created automated machine learning algorithms to detect cyberbullying using social and textual features. However, the very algorithms that are intended to fight off one threat (cyberbullying) may inadvertently be falling prey to another important threat (bias of the automatic detection algorithms). This is exacerbated by the fact that while the current literature on algorithmic fairness offers multiple empirical results, metrics, and algorithms for countering bias across immediately observable demographic characteristics (e.g., age, race, gender), there have been no efforts at empirically quantifying the variation in algorithmic performance based on the network role or position of individuals. We audit an existing cyberbullying detection algorithm using Twitter data for disparity in detection performance based on the network centrality of the potential victim, and then demonstrate how this disparity can be countered using an Equalized Odds post-processing technique. The results pave the way for more accurate and fair cyberbullying detection algorithms.
- Vivek Singh
- Connor Hofenbitzer
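The abstract's core idea is auditing a classifier for performance gaps across network-centrality groups and then applying Equalized Odds post-processing, which picks group-specific decision thresholds so that true- and false-positive rates are (approximately) equalized across groups. Below is a minimal, hypothetical sketch of that workflow using synthetic data, a logistic-regression stand-in for the audited cyberbullying detector, and fairlearn's `ThresholdOptimizer`; the median-based split into "central" vs. "peripheral" users, the feature set, and the choice of library are all assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch of an Equalized Odds audit-and-repair loop.
# All data, group definitions, and the base model are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import MetricFrame, true_positive_rate, false_positive_rate

rng = np.random.default_rng(0)

# Synthetic stand-ins: social/textual features X, bullying labels y, and a
# per-user network centrality score (e.g., degree or eigenvector centrality).
n = 2000
X = rng.normal(size=(n, 10))
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)
centrality = rng.random(n)

# Assumed grouping: split users at the median centrality.
group = np.where(centrality > np.median(centrality), "central", "peripheral")

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)

# Base detector: stand-in for the audited cyberbullying classifier.
base = LogisticRegression().fit(X_tr, y_tr)

# Equalized Odds post-processing: learn group-specific thresholds that
# approximately equalize TPR and FPR across the centrality groups.
eo = ThresholdOptimizer(estimator=base, constraints="equalized_odds",
                        prefit=True, predict_method="predict_proba")
eo.fit(X_tr, y_tr, sensitive_features=g_tr)
y_hat = eo.predict(X_te, sensitive_features=g_te, random_state=0)

# Audit step: report per-group TPR/FPR after post-processing.
mf = MetricFrame(metrics={"tpr": true_positive_rate,
                          "fpr": false_positive_rate},
                 y_true=y_te, y_pred=y_hat, sensitive_features=g_te)
print(mf.by_group)
```

Note that Equalized Odds post-processing only adjusts decision thresholds per group; it does not retrain the underlying detector, which is what makes it applicable to auditing an existing, already-deployed model as described in the abstract.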