A new tool aims to let video gamers control how much vitriol they receive from fellow players.

What’s new: Intel announced a voice recognition tool called Bleep that the company claims can moderate voice chat automatically, allowing users to silence offensive language. The system is in beta testing and scheduled for release later this year.

How it works: Chip maker Intel worked with Spirit AI, which develops NLP technology for content moderation, to let users of voice chat fine-tune how much of specific types of offensive language can reach their ears.

  • Bleep combines speech detection technology with Spirit’s flagship product, which determines whether a phrase constitutes harassment in the context of surrounding chatter.
  • The system classifies offensive speech into nine categories, including misogyny, sexually explicit language, and anti-LGBTQ hate speech. Users can opt to filter out none, some, most, or all content in any category. For a tenth category, the N-word, the system offers a simple on/off switch (a sketch of this per-category gating follows the list).
  • It runs on Windows PCs and, because it hooks directly into Windows’ audio controls, works with a variety of voice-chat apps.
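
Intel hasn’t published Bleep’s internals, so the following is only a sketch of how per-category dials might gate the output of an upstream classifier. The category names, thresholds, and function shapes are all assumptions, not Intel’s actual design:

```python
from dataclasses import dataclass, field
from enum import IntEnum


class FilterLevel(IntEnum):
    """How much of a category's content the user wants filtered out."""
    NONE = 0  # let everything through
    SOME = 1
    MOST = 2
    ALL = 3   # silence everything in the category


# Hypothetical stand-ins for the nine tunable categories described above.
TUNABLE_CATEGORIES = ("misogyny", "sexually_explicit", "anti_lgbtq_hate")

# Each dial maps to a severity threshold; a stricter dial bleeps milder speech.
THRESHOLDS = {FilterLevel.NONE: 1.01, FilterLevel.SOME: 0.9,
              FilterLevel.MOST: 0.5, FilterLevel.ALL: 0.0}


@dataclass
class UserSettings:
    # One dial per category; defaults to filtering nothing.
    levels: dict[str, FilterLevel] = field(
        default_factory=lambda: {c: FilterLevel.NONE for c in TUNABLE_CATEGORIES})
    filter_n_word: bool = False  # the tenth category is a plain on/off switch


def should_bleep(scores: dict[str, float], n_word_detected: bool,
                 settings: UserSettings) -> bool:
    """Decide whether to silence a phrase given the user's dials.

    `scores` holds per-category severities in [0, 1], assumed to come from a
    classifier that judges each phrase in the context of surrounding chatter.
    """
    if n_word_detected and settings.filter_n_word:
        return True
    return any(score > THRESHOLDS[settings.levels[cat]]
               for cat, score in scores.items() if cat in settings.levels)


# Example: a user who filters "most" misogyny hears nothing scored above 0.5.
settings = UserSettings(filter_n_word=True)
settings.levels["misogyny"] = FilterLevel.MOST
assert should_bleep({"misogyny": 0.7}, n_word_detected=False, settings=settings)
```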

Behind the news: Several other companies offer AI-driven moderation tools.
  • ToxMod also aims to moderate video game voice chat and provides a dashboard for human moderators to track offensive speech across servers.
  • Hive’s system is designed to moderate audio, video, text, and images. Its customers include Chatroulette, which uses Hive’s technology to help users avoid unwanted nudity.
  • Two Hat’s text-moderation system detects efforts to subvert moderation by, say, intentionally misspelling slurs and other potentially offensive language (sketched below).
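
Two Hat hasn’t disclosed its approach, but a common baseline for catching deliberately obfuscated terms is to normalize text before matching it against a blocklist. A minimal sketch, with a placeholder blocklist and a hypothetical substitution table:

```python
import re

# Hypothetical leetspeak substitutions; real systems use far richer tables.
LEET_MAP = str.maketrans({"@": "a", "4": "a", "$": "s", "5": "s",
                          "1": "i", "!": "i", "0": "o", "3": "e"})

BLOCKLIST = {"slur"}  # placeholder terms, not a real word list


def normalize(text: str) -> str:
    """Undo common obfuscations before matching."""
    text = text.lower().translate(LEET_MAP)       # "$lur" -> "slur"
    text = re.sub(r"[\s\.\-_*]+", "", text)       # "s l u r" -> "slur"
    text = re.sub(r"(.)\1{2,}", r"\1", text)      # "sluuuur" -> "slur"
    return text


def contains_blocked(text: str) -> bool:
    normalized = normalize(text)
    return any(term in normalized for term in BLOCKLIST)
```

A production system would pair normalization like this with learned models, since substring matching alone over-flags innocent words and misses novel spellings.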

Why it matters: There’s a clear need for tools that help people enjoy networked communications without being targeted by abuse. Twenty-two percent of U.S. online gamers stopped playing certain games after experiencing verbal harassment, according to a survey by the Anti-Defamation League.

We’re thinking: For those whose first thought is, “Censorship!,” note that users will control this auto-moderation capability locally. At the same time, there’s a fine line between blocking harassment and shutting out perspectives we don’t currently share. In an ideal world, players would take it upon themselves to keep their conversations civil. Until that day comes, AI will play a valid, if at times worrisome, role.
