Center for Informed Democracy and Social-cybersecurity Seminar

  • Remote Access - Zoom
  • Virtual Presentation

"Yeah well, that’s just your opinion man”: The fallouts of abusive language detection

As abusive language detection systems are deployed more widely, it is necessary to begin critically examining how they are defined, understood, and put into practice. This talk approaches the question of whom content moderation systems serve. It first gives a brief overview of the datasets and technologies applied to the problem of online abuse, asking what logics notions of “toxicity” invoke and how those logics permeate the machine learning pipeline. Continuing this line of reasoning, the talk examines how notions of objectivity are built into the machine learning pipeline and how they shape which populations come under additional scrutiny from abusive language detection systems, and why. Finally, the talk lays out future directions for research into content moderation systems and how they can address, rather than propagate, marginalization.

Zeerak Waseem is a Ph.D. candidate at the University of Sheffield whose work focuses on computational approaches to online abuse detection and on ethics in machine learning and natural language processing.

