Facebook's Architecture Undermines Vaccine Misinformation Removal Efforts (2202.02172v2)
Abstract: Misinformation promotes distrust in science, undermines public health, and may drive civil unrest. Vaccine misinformation, in particular, has stalled efforts to overcome the COVID-19 pandemic, prompting social media platforms to attempt to reduce it. Some have questioned whether "soft" content moderation remedies -- e.g., flagging and downranking misinformation -- were successful, suggesting that the addition of "hard" content remedies -- e.g., deplatforming and content bans -- is necessary. We therefore examined whether Facebook's vaccine misinformation content removal policies were effective. Here, we show that Facebook's policies reduced the number of anti-vaccine posts but also caused several perverse effects: pro-vaccine content was also removed, engagement with the remaining anti-vaccine content repeatedly recovered to pre-policy levels, and this content became more misinformative, more politically polarised, and more likely to be seen in users' newsfeeds. We explain these results as an unintended consequence of Facebook's design goal: promoting community formation. Members of communities dedicated to vaccine refusal appear to seek out misinformation from multiple sources. Community administrators make use of several channels afforded by the Facebook platform to disseminate misinformation. Our findings suggest the need to address how social media platform architecture enables community formation and mobilisation around misinformative topics when managing the spread of online content.
- David A. Broniatowski
- Jiayan Gu
- Amelia M. Jamison
- Joseph R. Simons
- Lorien C. Abroms