Social Media Bots and the Amplification Effect

Software bots that engage with certain social media posts likely play a role in manipulating public opinion. We need better algorithms to distinguish fake from real engagement.


Dear friends,

Trump and the Republican Party chalked up huge wins this week. Did manipulation of social media by generative AI play any role in this election? While many have worried about AI creating fake or misleading content that influences people, generative AI probably was not the primary method of manipulation in this election cycle. Instead, I think a bigger impact may have come from the “amplification effect,” in which software bots (which don’t have to rely heavily on generative AI) create fake engagement such as likes, retweets, and reshares, leading social media companies’ recommendation algorithms to amplify certain content to real users, some of whom then promote it to their own followers. This is how fake engagement leads to real engagement.

This amplification effect is well known to computer security researchers. It is an interesting sign of our global anxiety about AI that people ascribe social media manipulation to AI becoming more powerful. But the problem here is not that AI is too powerful; rather, it is that AI is not powerful enough. Specifically, the issue is not that generative AI is so powerful that hostile foreign powers or unethical political operatives are successfully using it to create fake media that influences us; it is that some social media companies’ AI algorithms are not powerful enough to screen out fake engagement by software bots, and so they mistake it for real engagement by users. These bots, which don’t need to be very smart, fool the recommender algorithms into amplifying certain content.
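To make the mechanism concrete, here is a toy sketch of an engagement-weighted ranker and a crude bot filter. Everything in it (the account features, weights, and thresholds) is hypothetical and chosen for illustration; no real platform’s recommender works this way.

```python
# Toy sketch of the amplification effect: an engagement-weighted ranker
# that bots can game, plus a crude heuristic filter. All names, numbers,
# and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # how long the account has existed
    actions_per_day: float  # average likes/reshares per day
    followers: int

@dataclass
class Engagement:
    account: Account
    kind: str  # "like" or "reshare"

# Reshares spread content further, so weight them more (hypothetical weights).
WEIGHTS = {"like": 1.0, "reshare": 3.0}

def looks_like_bot(a: Account) -> bool:
    """Crude heuristic: new, hyperactive, low-follower accounts are suspect."""
    return a.age_days < 30 and a.actions_per_day > 200 and a.followers < 10

def score(engagements, filter_bots=False):
    """Engagement-weighted score; optionally drop suspected bot activity."""
    return sum(
        WEIGHTS[e.kind]
        for e in engagements
        if not (filter_bots and looks_like_bot(e.account))
    )

# One organic post, one post boosted by a small botnet.
human = Account(age_days=900, actions_per_day=5, followers=300)
bot = Account(age_days=3, actions_per_day=500, followers=2)

organic_post = [Engagement(human, "like") for _ in range(40)]
boosted_post = ([Engagement(human, "like") for _ in range(10)]
                + [Engagement(bot, "reshare") for _ in range(50)])

for label, post in [("organic", organic_post), ("boosted", boosted_post)]:
    print(label,
          "naive score:", score(post),
          "filtered score:", score(post, filter_bots=True))
```

Under naive scoring, the bot-boosted post outranks the organic one (160 vs. 40); once it is ranked highly, real users see it and some reshare it themselves, which is how fake engagement converts into real engagement. The simple filter flips the ordering, but real detection relies on far richer signals (timing patterns, network structure, coordinated behavior), and bot operators adapt as filters improve, which is why screening out fake engagement remains hard.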

The Washington Post reported that tweets on X/Twitter posted by Republicans were more viral than tweets from Democrats. Did this reflect deeper audience engagement with Republican messages than with Democratic ones, or did bots influence the outcome by boosting messages on either side? It is hard to know without access to X/Twitter’s internal data.

The bottleneck to disinformation is not creating it but disseminating it. It is easy to write text that advances a certain view, but hard to get many people to read it. Rather than generating a novel message (or using deepfakes to generate a misleading image) and hoping it will go viral, it might be easier to find a message written by a real human that supports a point of view you want to spread, and use bots to amplify that.

I don’t know of any easy technical or legislative approach to combating bots. But requiring transparency from social media platforms would be a good step, so we can better spot problems if they arise. Everyone has a role to play in protecting democracy, and in tech, part of our duty is to make sure social media platforms are fair and to defend them against manipulation by those who seek to undermine democracy.

Democracy is one of humanity’s best inventions. Elections are an important mechanism for protecting human rights and supporting human flourishing. Following this election, we must continue to strenuously nourish democracy and make sure this gem of human civilization continues to thrive.

Keep learning!

Andrew
