Content Moderation

10 Posts

Metaverse illustration with Meta AI product names

Meta Decentralizes AI Effort: Meta Restructures its AI Research Teams

The future of Big AI may lie with product-development teams. Meta reorganized its AI division: henceforth, AI teams will report to the departments that develop key products.
Matt Zeiler

Matt Zeiler: Advance AI for good.

There’s a reason why artificial intelligence is sometimes referred to as “software 2.0”: It represents the most significant technological advance in decades. Like any groundbreaking invention, it raises concerns about the future, and much of the media focus is on the threats it brings.
Illustration of a woman riding a sled

Multimodal AI Takes Off: Multimodal models, such as CLIP and DALL-E, are taking over AI.

While models like GPT-3 and EfficientNet, which work on text and images respectively, are responsible for some of deep learning’s highest-profile successes, approaches that find relationships between text and images made impressive strides.
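As a minimal sketch of the idea behind such models, a pretrained CLIP checkpoint can score how well each candidate caption matches an image by embedding both into a shared space. The Hugging Face transformers API calls below are real, but the checkpoint choice, example image URL, and captions are illustrative assumptions, not details from the post.

from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP checkpoint (assumed choice; other CLIP variants work similarly).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# An example image and candidate captions to compare against it.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
captions = ["a photo of two cats on a couch", "a photo of a dog in a park"]

# Encode both modalities into the shared embedding space and score each image-text pair.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into a
# distribution over the candidate captions, with the best match getting the most mass.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))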
Demonstration of the Suspicious User Control tool in the Stream Chat section (Twitch)

Troll Recognition: Twitch uses AI to flag trolls who try to avoid bans.

A prominent online streaming service is using a machine learning model to identify trolls who try to get around being banned.
Voice recognition tool "Bleep" working

Haters Gonna [Mute]: Intel NLP Allows Voice Chatters to Silence Hate Speech

A new tool aims to let video gamers control how much vitriol they receive from fellow players. Intel announced a voice recognition tool called Bleep that the company claims can moderate voice chat automatically, allowing users to silence offensive language.
Facebook like and dislike buttons

Social Engagement vs. Social Good

Facebook’s management prevented the architect of its recommendation algorithms from mitigating their negative social impact. The social network focused on reining in algorithmic bias against particular groups of users at the expense of efforts to reduce disinformation.
Person in wheelchair, person in side profile, person wearing a hoodie

Human Disabilities Baffle Algorithms

Facebook’s content moderation algorithms block many advertisements aimed at disabled people. The social media platform’s automated systems regularly reject ads for clothing designed for people with physical disabilities.
Forbidden sign over symbols of potentially dangerous falsehoods

YouTube vs. Conspiracy Theorists

Facing a tsunami of user-generated disinformation, YouTube is scrambling to stop its recommendation algorithm from promoting videos that spread potentially dangerous falsehoods.
Illustration of a broken heart with a smirk in the middle

Outing Hidden Hatred: Facebook Uses NLP and Vision AI to Detect Hateful Posts

Facebook uses automated systems to block hate speech, but hateful posts can slip through when seemingly benign words and pictures combine to create a nasty message. The social network is tackling this problem by enhancing AI’s ability to recognize context.
Angry emoji over dozens of Facebook like buttons

Facebook Likes Extreme Content

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.
