Why Moderators Can’t Protect Online Communities on Their Own

Olivier Sibai*, Marius K. Luedicke, Kristine De Valck

*Corresponding author for this work

Publication type: Popular science article (magazine)

Abstract

The data on online abuse is sobering: nearly one in three teens has been cyberbullied, and one in five women has experienced misogynistic abuse online. Overall, some 40% of all internet users have faced some form of online harassment. Why have online communities failed so dramatically to protect their users? An analysis of 18 years of data on user behavior and its moderation reveals that the failure stems from five misconceptions about toxicity held by the people responsible for moderating online behavior: that people experiencing abuse will leave, that incidents of abuse are isolated and independent, that abuse is not an inherent part of community culture, that rivalries in communities are beneficial, and that self-moderation can and does prevent abuse. These misconceptions drive current moderation practices. In each case, the authors present findings that both debunk the myth and point to more effective ways of managing toxic online behavior.
Original language: English
Volume: (online publication)
Specialist publication: Harvard Business Review
Publication status: Published - 6 Nov 2024

Keywords

  • online communities
  • conflict
  • online violence
  • toxicity
