Dear friends,

What should be AI’s role in moderating the millions of messages posted on social media every day? The volume of messages means that automation is required. But the question of what is appropriate moderation versus inappropriate censorship lingers.

AI is helpful for scaling up a moderation policy. But it doesn’t address the core challenge of defining a policy: Which expressions to permit and which to block. This is hard for both humans and AI.

Deciding what to block is hard because natural language is ambiguous.

  • “Don’t let them get away with this” could be an incitement to violence or a call for justice.
  • “The vaccine has dangerous side effects” could be a scientific fact or misinformation.

The meanings of words vary from person to person. My son says “wawa” when he wants water, and only his close family (and now you!) understand. At work, teams invent acronyms that others don’t understand. More problematically, lawbreakers and hate groups develop code words to discuss their activities.

If humans understand the same words differently, how can we train an AI to make such distinctions? If a piece of text has no fixed meaning, then enforcing policies based on the text is difficult. Should we hide it from user A if they would read it as promoting violence, but show it to user B if they would view it as benign? Or should hiding a message be based on the intent of the sender? None of these options is satisfying.

A red Facebook dislike button surrounded by dozens of Facebook like buttons

Further, getting the data to build an AI system to accomplish any of this is hard. How can the developers who gather the data understand its full range of meanings? Different communities have their own interpretations, making it impossible to keep track.

Even if the meaning is unambiguous, making the right decision is still hard. Fortunately, social media platforms can choose from a menu of options depending on how egregious a message is and the degree of confidence that it’s problematic. Choices include showing it to a smaller audience, adding a warning label, and suspending, temporarily or permanently, the user who posted it. Having a range of potential consequences helps social media platforms manage the tradeoff between silencing and protecting users (and society).
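
To make this concrete, here is a minimal, hypothetical sketch of what such a graduated policy might look like in code. The severity labels, confidence thresholds, and action names are illustrative assumptions, not any platform’s actual rules.

```python
# Hypothetical sketch of a graduated moderation policy. The severity labels,
# confidence thresholds, and actions are illustrative assumptions, not any
# platform's actual rules.

def choose_action(severity: str, confidence: float) -> str:
    """Map a classifier's severity estimate and confidence to a response."""
    if severity == "severe" and confidence >= 0.95:
        return "remove_and_suspend_poster"
    if severity == "severe":
        return "remove_pending_human_review"
    if severity == "moderate" and confidence >= 0.90:
        return "add_warning_label"
    if severity == "moderate":
        return "reduce_distribution"  # show the message to a smaller audience
    return "allow"

print(choose_action("moderate", 0.8))  # reduce_distribution
```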

Despite their flaws, AI systems make social media better. Imagine email without AI-driven spam filtering; it would rapidly become unusable. Similarly, AI is critical for eliminating the most spammy or toxic social media messages. But the challenge of moderating any given message transcends AI.

It’s important to acknowledge this challenge openly, so we can debate the principles we would apply to this problem and recognize that there may be no perfect solution. Through transparent and robust debate, I believe that we can build trust around content moderation and make tradeoffs that maximize social media’s benefit.

Keep learning!

Andrew

DeepLearning.AI Exclusive

Andrew Ng's 'How to Build Your Career in AI' eBook cover

Free eBook: Build Your Career in AI

How do you build an AI resume without job experience? Prepare for an interview? Overcome imposter syndrome? This new eBook collects advice for job-seekers from Andrew Ng. Get your free copy here


News

Video showing Apple's self-driving car

Drive Different

Apple is redrawing the road map for its self-driving car.

What's new: The company is redesigning an autonomous car that has been in development for nearly a decade, Bloomberg reported. Originally intended to be fully autonomous under all conditions, the redesigned vehicle will allow for a human driver.

Downshift: Apple had scheduled the vehicle, code-named Titan, for 2025, anonymous insiders said. However, executives realized earlier this year that they couldn’t meet the deadline and decided to scale back the autonomous features. The new timeline calls for a prototype by 2024, testing through 2025, and launch in 2026. The target price is under $100,000, a markdown from the original $120,000. The company is currently testing its semi-autonomous system on Lexus SUVs in several U.S. states.

  • The original design called for an interior in which all the seats faced the center, without a steering wheel or pedals. The new design will include human controls.
  • The revamped car will drive autonomously only on highways, allowing drivers to watch movies and play video games. It will alert them when manual control is required to negotiate surface streets or bad weather.
  • The self-driving system navigates using data from lidar, radar, and cameras. An onboard processor nicknamed Denali executes some tasks while Amazon Web Services handles others in the cloud.
  • Remote operators may take over control of vehicles during emergencies.

Behind the news: Fully self-driving cars on the open road remain limited to a few robotaxi deployments in China and the United States. Meanwhile, the industry has suffered a series of setbacks. Ford shut down Argo, its joint project with Volkswagen. Tesla’s purported Full Self-Driving option requires a human in the loop. Further development is required to enable such vehicles to drive safely despite challenges like road construction and snow.

Why it matters: Commercializing fully autonomous vehicles is a tantalizing but elusive goal. Apple’s decision to downshift for the sake of bringing a product to market suggests that human drivers will sit behind the wheel for the foreseeable future.

We're thinking: Full self-driving cars have been five years away for the past decade. The challenge of handling the long tail of rare but critical events has been a persistent issue. Upcoming developments such as foundation models for computer vision are likely to make a substantial difference. We don't know when, but we're confident that the future includes full autonomy.


Sequence showing how FIFA's Video Assistant Referee (VAR) system works

The World Cup’s AI Referee

The outcome of the FIFA World Cup 2022 depends partly on learning algorithms.

What's new: The quadrennial championship tournament of football (known as soccer in the United States), which wraps up this week, is using machine learning to help human arbiters spot players who break a rule that governs their locations on the field.

How it works: The offside rule requires that, when receiving a pass, members of the team in possession keep at least two opposing players between themselves and their opponents’ goal line. Referees often call offside erroneously, depending on their vantage point on the field. FIFA introduced a Video Assistant Referee (VAR) system in 2018; at this tournament, new machine learning capabilities help human assistants in a remote video center identify violations.

  • The ball contains sensors that track its location and motion. The sensors send data to the remote facility 500 times per second.
  • Twelve cameras installed under a stadium’s roof capture gameplay from various angles. They transmit data 50 times per second.
  • In the remote facility, a computer vision system combines the data streams to track each player’s location and pose. It watches for offside violations and alerts human officials when an offside player touches the ball (a simplified sketch of the underlying geometry appears after this list).
  • Officials validate alerts manually. After review, the system generates a 3D animation of the event from multiple perspectives, which is broadcast on screens around the stadium and live feeds of the match.
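
The geometric core of the offside check can be sketched in a few lines of Python. This is a simplified illustration under strong assumptions (each player reduced to a single coordinate along the length of the pitch), not FIFA’s implementation, which fuses ball-sensor data with many tracked body points per player.

```python
# Simplified sketch of an offside-position check. Assumes each player is
# reduced to one coordinate along the pitch, increasing toward the defending
# team's goal line. Illustrative only; not FIFA's implementation.

def is_offside_position(receiver_x: float, ball_x: float,
                        defenders_x: list[float]) -> bool:
    """Check the receiver's position at the moment the ball is played."""
    if receiver_x <= ball_x:
        return False  # level with or behind the ball: onside
    # The second-rearmost opponent (the rearmost is usually the goalkeeper).
    second_last_defender_x = sorted(defenders_x, reverse=True)[1]
    return receiver_x > second_last_defender_x

# Receiver ahead of the ball and of every defender except the goalkeeper:
print(is_offside_position(receiver_x=88.0, ball_x=60.0,
                          defenders_x=[99.5, 85.0, 70.0]))  # True
```

As described above, the system raises an alert only if a player flagged in an offside position then touches the ball.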

Behind the news: AI is watching activity off the pitch as well. Qatari authorities use face recognition to monitor fans for unruly behavior. Authorities also use computer vision to track crowd size and movement to prevent the violence and crowd crushes that have marred recent matches.

Controversy: The semi-automated offside detection system has been criticized by players who say its role in referee decisions is unclear.

Why it matters: Players and fans alike expect referees to be both objective and omnipresent — which is, of course, impossible for anyone to accomplish. AI isn’t a perfect substitute, but it allows officials to observe the action at an unprecedented level of detail.

We're thinking: If FIFA hasn’t come up with a name for the system, we humbly suggest: Football Net.


A MESSAGE FROM DEEPLEARNING.AI

New Specialization 'Mathematics for Machine Learning and Data Science' banner ad

Announcing our newest specialization! Mathematics for Machine Learning and Data Science is carefully designed to help you understand the fundamentals behind common algorithms and data analysis techniques. Scheduled to launch in January 2023! Join the waitlist


Sequence of cellphones showing a photograph turned into art on Lensa AI app

Avatars Gone Wild

A blockbuster app produces sexualized avatar images, even when the original portraits were safe for work.

What's new: Lensa AI, a photo editor that turns face photos into artistic avatars, sometimes generates sexualized images from plain selfies, according to several independent reports. It can also be manipulated to produce more explicit imagery, raising concerns that it may be used to victimize people by generating lewd images of their likeness.

How it works: Users upload 10 to 20 photos and choose a gender. The app uses the open-source Stable Diffusion image generator to produce images in various art styles including fantasy, comic-book, and faux-3D rendering. Users must buy a $36 annual subscription to use the image generator; avatars cost an additional $3.99 for 50 images, $5.99 for 100, or $7.99 for 200. The terms of service disallow nudes and photos of minors, and the app requests that users verify that they are adults.

NSFW: Journalists conducted tests after hearing complaints from users.

  • A reporter for MIT Technology Review, who is Asian and female, generated 100 avatars. Sixteen of them depicted her topless, and another 14 dressed her in revealing outfits. The app produced fewer sexualized images of white women, and fewer still when she chose the male gender option.
  • A Wired reporter, who is female, uploaded images of herself at academic conferences, and the app produced nude images. When she uploaded childhood face portraits of herself, it produced depictions of her younger self in sexualized poses.
  • A TechCrunch reporter uploaded two sets of images. One contained 15 non-sexual photos of a well-known actor. The other included the same 15 photos plus five in which the actor’s face had been edited onto topless female images. The first set generated benign outputs. Of the second set, 11 out of 100 generated images depicted a topless female.

Behind the news: Image generators based on neural networks have churned out nonconsensual nude depictions of real people at least since 2017. Open-source and free-to-use models have made it easier for the general public to create such images. In November, Stability AI, developer of Stable Diffusion, released a version trained on a dataset from which sexual images had been removed.

Why it matters: Text-to-image generators have hit the mainstream: Lensa was the top download in Apple’s App Store last week, and three similar apps were in the top 10. People who fear deepfakes now have cause for a once-hypothetical concern: Anybody with access to photos of another person could use them to generate compromising images of that person.

We're thinking: Image generation has widespread appeal and it’s easy to use. That’s no excuse for misusing it to degrade or harass people. Creating or sharing a nude depiction of someone without their permission is never okay.


Diagram explaining Atlas, a retrieval-augmented language model that exhibits strong few-shot performance on knowledge tasks

Memorize Less, Retrieve More

Large language models are trained only to predict the next word based on previous ones. Yet, given a modest fine-tuning set, they acquire enough information to learn how to perform tasks such as answering questions. New research shows how smaller models, too, can perform specialized tasks relatively well after fine-tuning on only a handful of examples.

What’s new: Atlas is a language model of modest size that fulfills prompts by referring to external documents. Gautier Izacard and Patrick Lewis led the project with colleagues at Meta, École Normale Supérieure, Paris Sciences et Lettres, Inria, and University College London.

Key insight: A large language model uses its huge complement of parameters to memorize information contained in its pretraining and fine-tuning datasets. It wouldn’t need to memorize so much — and thus wouldn’t need so many parameters — if it had access to documents on demand.

How it works: Atlas comprises a retriever that’s pretrained to fetch relevant documents from Wikipedia and Common Crawl, and a language model that uses the retrieved documents to respond to prompts. The authors fine-tuned the system to complete tasks including answering open-ended questions in KILT and multiple-choice questions in MMLU.

  • The retriever includes two transformers. One learned to produce an embedding of a prompt (when fine-tuning for, say, answering questions, it learned to produce an embedding of a question). The other learned to produce an embedding of a document, which was stored.
  • The language model, an encoder-decoder that produces its own embedding of the document, was trained by having it fill in missing words in Wikipedia and Common Crawl.
  • The authors further trained the retriever and language model on a similar task (but with different loss functions). The language model, given new text with missing words and its own document embeddings, learned to fill in the missing words. The retriever, given the text with missing words, learned to identify documents that contain similar text. The retriever’s loss function encouraged it to rate documents as more similar to the prompt if the language model was more confident in the text it generated using those documents.
  • Given a prompt, the retriever compared it to its stored document embeddings and selected the 20 most relevant documents. Then, given the prompt and the retrieved documents, the language model generated the output (a simplified sketch of this retrieve-then-read step follows this list).
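
A minimal sketch of that retrieve-then-read step appears below. The functions embed_query, embed_document, and generate are hypothetical stand-ins for the retriever’s two transformers and the encoder-decoder language model; this illustrates the pattern, not the authors’ code.

```python
import numpy as np

def embed_query(prompt: str) -> np.ndarray:
    """Stand-in for the retriever's query encoder."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=128)

def embed_document(doc: str) -> np.ndarray:
    """Stand-in for the retriever's document encoder (run once, embeddings stored)."""
    rng = np.random.default_rng(abs(hash(doc)) % (2**32))
    return rng.normal(size=128)

def generate(prompt: str, context: list[str]) -> str:
    """Stand-in for the encoder-decoder that reads the retrieved documents."""
    return f"answer to {prompt!r} based on {len(context)} retrieved documents"

def retrieve_then_read(prompt: str, docs: list[str],
                       doc_embeddings: np.ndarray, k: int = 20) -> str:
    query = embed_query(prompt)
    scores = doc_embeddings @ query          # dot-product relevance scores
    top_k = np.argsort(scores)[::-1][:k]     # indices of the k best documents
    return generate(prompt, [docs[i] for i in top_k])

docs = ["a Wikipedia page", "a Common Crawl snippet", "another document"]
doc_embeddings = np.stack([embed_document(d) for d in docs])
print(retrieve_then_read("Who led the Atlas project?", docs, doc_embeddings, k=2))
```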

Results: MMLU offers four possible answers to each question, so random chance yields 25 percent accuracy. The questions in KILT’s Natural Questions subset are open-ended, so accuracy measures the percentage of outputs that exactly matched ground truth.

  • Fine-tuned on five MMLU examples, Atlas (11 billion parameters) achieved 47.9 percent average accuracy, while GPT-3 (175 billion parameters) achieved 43.9 percent. (Atlas didn’t beat the 70-billion-parameter Chinchilla, which achieved 67.5 percent.)
  • Fine-tuned on all MMLU training examples, Atlas achieved 66 percent average accuracy, while GPT-3 achieved 53.9 percent.
  • Fine-tuned on 64 Natural Questions examples, Atlas achieved 42.4 percent accuracy, while the next-best model, PaLM (540 billion parameters), achieved 39.6 percent.
  • Fine-tuned on all Natural Questions training examples, Atlas achieved 60.4 percent accuracy, while the previous state of the art, R2-D2 (1.3 billion parameters), achieved 55.9 percent.

Why it matters: Training smaller models consumes less energy and costs less. Shifting knowledge that the model would otherwise memorize from its parameters into an external database not only reduces the number of parameters needed but also makes the model’s knowledge easier to update. Instead of retraining the model, you can simply extend the document database by feeding new documents to the document encoder and storing the resulting embeddings.
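
Continuing the hypothetical sketch above, updating the system’s knowledge amounts to embedding the new documents with the unchanged document encoder and appending them to the stored index:

```python
# Continues the hypothetical sketch above; no model weights are updated.
new_docs = ["a 2023 news article", "a newly edited Wikipedia page"]
docs.extend(new_docs)
doc_embeddings = np.concatenate(
    [doc_embeddings, np.stack([embed_document(d) for d in new_docs])]
)
# Subsequent retrieve_then_read calls can now draw on the new documents.
```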

We’re thinking: Augmenting a language model’s training with retrieved documents is a promising avenue of research. RETRO did something similar, but it wasn’t fine-tuned on particular tasks, much less on a handful of examples. Similarly, researchers at Meta built a chatbot that used documents found on the web to generate more realistic conversations.
