Content Moderation

12 Posts


Algorithm Investigators: All about the EU's new Centre for Algorithmic Transparency

A new regulatory body created by the European Union promises to peer inside the black boxes that drive social media recommendations. The European Centre for Algorithmic Transparency (ECAT) will study the algorithms that identify, categorize...

Better Pay for Data Workers: Google contractors get a raise.

Contract workers who help train the algorithms behind Google Search won a pay raise. Employees of U.S. contractors who evaluate the quality of Google Search’s results, knowledge panels, and ads will earn $15 per hour, a raise of roughly $1.

Meta Decentralizes AI Effort: Meta restructures its AI research teams.

The future of Big AI may lie with product-development teams. Meta reorganized its AI division. Henceforth, AI teams will report to departments that develop key products.

Matt Zeiler: Advance AI for good

There’s a reason why artificial intelligence is sometimes referred to as “software 2.0”: It represents the most significant technological advance in decades. Like any groundbreaking invention, it raises concerns about the future, and much of the media focus is on the threats it brings.

Multimodal AI Takes Off: Multimodal models such as CLIP and DALL·E are taking over AI.

While models like GPT-3 and EfficientNet, which work on text and images respectively, are responsible for some of deep learning’s highest-profile successes, approaches that find relationships between text and images made impressive...

Troll Recognition: Twitch uses AI to flag trolls who try to avoid bans.

A prominent online streaming service is using a machine learning model to identify trolls who try to get around being banned.

Haters Gonna [Mute]: Gamers can mute offensive language with AI.

A new tool aims to let video gamers control how much vitriol they receive from fellow players. Intel announced a voice recognition tool called Bleep that the company claims can moderate voice chat automatically, allowing users to silence offensive language.

Social Engagement vs. Social Good: The builder of Facebook's algorithm talks bias.

Facebook’s management obstructed the architect of its recommendation algorithms from mitigating their negative social impact. The social network focused on reining in algorithmic bias against particular groups of users at the expense of efforts to reduce disinformation.

Human Disabilities Baffle Algorithms: Facebook blocked ads aimed at people with disabilities.

Facebook’s content moderation algorithms block many advertisements aimed at disabled people. The social media platform’s automated systems regularly reject ads for clothing designed for people with physical disabilities.

YouTube vs. Conspiracy Theorists: How YouTube uses AI to spot conspiracies

Facing a tsunami of user-generated disinformation, YouTube is scrambling to stop its recommendation algorithm from promoting videos that spread potentially dangerous falsehoods.

Outing Hidden Hatred: How Facebook built a hate speech detector

Facebook uses automated systems to block hate speech, but hateful posts can slip through when seemingly benign words and pictures combine to create a nasty message. The social network is tackling this problem by enhancing AI’s ability to recognize context.

Facebook Likes Extreme Content: Facebook execs rejected changes to reduce polarization.

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal. The social network’s own researchers determined that its AI software promotes divisive content.
