Google updated a key model behind the algorithm that ranks its search results to respond to the flood of misinformation on the web.
What’s new: The search giant aims to reduce the prominence of falsehoods in the highlighted answers it displays near the top of search results, which it calls featured snippets.
How it works: Google revised its Multitask Unified Model to verify the accuracy of snippets.
- The model evaluates how well the top results agree. It can compare pages on a given topic even if they use different phrases or examples.
- If the model lacks high confidence in the available sources, it generates an advisory instead of a snippet, such as, “It looks like there aren’t many great results for this search.”
- The model also recognizes misleading questions such as, “When did Snoopy assassinate Abraham Lincoln?” The update cuts inappropriate snippets in response to such queries by 40 percent.
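Google hasn’t published the verification logic, but the first two steps above suggest a confidence-gated pipeline: score how well the top results corroborate one another, then show a snippet only if agreement clears a threshold. The sketch below is a toy illustration of that idea under stated assumptions; it substitutes a simple word-overlap (Jaccard) score for MUM’s cross-document comparison, and the threshold value is invented for illustration.

```python
# Toy sketch of confidence-gated snippet selection. Google's actual system
# uses the Multitask Unified Model to compare pages; here, word overlap
# stands in for that comparison, and the 0.5 threshold is an assumption.

ADVISORY = "It looks like there aren't many great results for this search."

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two result texts (stand-in for MUM)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def pick_snippet(results: list[str], threshold: float = 0.5) -> str:
    """Show the top result as a snippet only if the other results agree with it."""
    if not results:
        return ADVISORY
    top, others = results[0], results[1:]
    if not others:
        return ADVISORY  # no corroborating sources available
    agreement = sum(jaccard(top, r) for r in others) / len(others)
    return top if agreement >= threshold else ADVISORY
```

When the top results broadly agree, `pick_snippet` surfaces the first one; when they diverge (or there is nothing to corroborate them), it falls back to the advisory, mirroring the behavior described above.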
Behind the news: Google isn’t the only major website to task AI with filtering the torrent of disinformation.
- Facebook uses multimodal learning to detect misinformation related to COVID-19.
- In 2020, YouTube deployed a classifier that downgraded recommendations for videos that contain conspiracy theories and anti-scientific misinformation.
Why it matters: Human fact-checkers can’t keep up with the rising tide of misinformation. Granted, AI’s record of moderating online content is imperfect: Facebook, for instance, faces allegations that its algorithms suppress ads aimed at people with disabilities and overlook incitements to violence. But even incremental improvements are worthwhile in the face of anti-vaccine panics, denial of climate change, and calls for genocide.
We’re thinking: AI is an important tool in keeping web searches honest, but it’s not yet ready to do the job alone. Just as autonomous taxi companies often employ human safety drivers to oversee their vehicles, automated content moderation systems benefit from humans in the loop.