Demonstration of the Suspicious User Control tool in the Stream Chat section (Twitch)

A prominent online streaming service is using a machine learning model to identify trolls who try to get around being banned.

What’s new: Twitch, a crowdsourced streaming platform used primarily by video game enthusiasts, unveiled Suspicious User Detection. The new feature alerts streamers and moderators when it recognizes a banned user who has logged in under a new name.
How it works: Twitch users deliver content through a channel, while the audience can watch, listen, and chat. Users who experience harassment can ban offenders from their channels. However, a ban doesn’t prevent aggressors from signing in under a new account and resuming the harassment.

  • The model behind Suspicious User Detection scans the platform for signals that may indicate aggression. When it spots them, it compares information about the offender, including chat behavior and account details, with that of banned accounts.
  • It classifies offenders as either possible or likely ban evaders. It blocks messages from likely evaders so that only streamers and moderators can see them, and those parties can choose to ban the sender. It allows possible evaders to continue chatting but flags them to streamers and moderators, who can keep tabs on them and ban them if their activity warrants it. (A minimal decision sketch follows this list.)
  • Suspicious User Detection is active by default, but streamers can disable it in their channels.
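
Twitch hasn’t published its implementation, but purely as an illustration, the two-tier routing described above might look something like the Python sketch below. The function similarity_to_banned_accounts and the thresholds are invented for this example; a real system would rely on a learned model over chat behavior and account details.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    """Hypothetical outcomes mirroring the two-tier classification described above."""
    LIKELY_EVADER = auto()    # messages shown only to the streamer and moderators
    POSSIBLE_EVADER = auto()  # messages allowed but flagged for monitoring
    CLEAR = auto()            # no action


@dataclass
class ChatMessage:
    account_id: str
    text: str


def similarity_to_banned_accounts(message: ChatMessage) -> float:
    """Placeholder: compare chat behavior and account details against banned accounts.
    A production system would use a trained model; this stub returns a dummy score."""
    return 0.0


def moderate(message: ChatMessage,
             likely_threshold: float = 0.9,
             possible_threshold: float = 0.5) -> Verdict:
    """Route a message according to how closely its sender resembles a banned account."""
    score = similarity_to_banned_accounts(message)
    if score >= likely_threshold:
        return Verdict.LIKELY_EVADER
    if score >= possible_threshold:
        return Verdict.POSSIBLE_EVADER
    return Verdict.CLEAR
```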

Behind the news: Trolls are inevitable on any online platform. Twitch isn’t the only one that uses machine learning to combat them.

  • In the fourth quarter of 2020, Facebook’s hate speech detector caught 49 percent of comments that contained harassment or bullying, including comments in non-English languages like Arabic and Spanish.
  • Built by Intel and Spirit AI, Bleep monitors voice chat. It uses speech recognition to classify offensive language into one of 10 categories and lets users choose how much of each category to filter out.
  • YouTube developed a model that recognizes titles, comments, and other signals associated with videos that spread conspiracy theories and disinformation. The model cut time spent watching such content by 70 percent across its platform.

Why it matters: Twitch is one of the world’s largest streaming platforms, but many of its contributors build their own anti-harassment tools in the face of what they feel is a lack of attention from the company. AI moderation tools can protect audience members looking to enjoy themselves, content creators aiming to deliver a great experience, and publishers who want to maximize valuable engagement metrics.

We’re thinking: Moderating online content is a game of cat and mouse but, as social media balloons, there simply aren’t enough paws to keep the vermin in check. AI tools can’t yet catch every instance of harassment, but they can extend the reach of human mods.
