Dear friends,

Like many of you, I’m deeply saddened by the events of the past week. I’m horrified by the senseless violence perpetrated against Black communities and appalled by the persistent racial injustice of our society. It’s long past time to right these terrible wrongs.

The tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor, Sean Reed, and innumerable others remind us that life is precious, and that we have much more work to do to build an inclusive society. Minority voices are often marginalized, and that creates a responsibility for the rest of us to keep our ears and minds open, and to add our voices to theirs when the occasion calls for it.

Posters of George Floyd stuck on a glass door

The AI community itself has a diversity problem. The number of Black people in the field is vanishingly small. A narrow perspective can lead to severely flawed work if we overlook factors like skin color when we collect and annotate datasets or validate results. Without diverse teams, instead of building AI systems that help a cross section of people, we open doors for some while locking out others.

Lack of diversity in the AI community has another effect: It reinforces the belief, often unconscious, that certain people can’t make important contributions to the field. We need to fight this sort of bias as well.

If you are Black and working in AI, we would like to know about your experiences in the field. If you have Black colleagues whom you admire, please let us know about them as well. We hope to share some of your stories. Please write to us at hello@deeplearning.ai.

Maybe I’m naive, but the protests this time do feel different, and I’m cautiously optimistic that this may be the time when we finally make a huge dent in racism. As members of the AI community, let us join this movement, condemn racism everywhere we see it, and settle for nothing less than a fair and inclusive world.

Keep learning!

Andrew

News

Angry emoji over dozens of Facebook like buttons

Facebook Likes Extreme Content

Facebook’s leadership has thwarted changes in its algorithms aimed at making the site less polarizing, according to the Wall Street Journal.

What’s new: The social network’s own researchers determined that its AI software promotes divisive content. But the company’s management rejected or weakened proposed reforms, concerned that such changes might cut into profits or give the appearance of muzzling conservatives.

Fizzled reforms: Facebook’s recommender system promotes posts from its most active users: those who do the most commenting, sharing, and liking. Internal investigations conducted between 2016 and 2018 showed that these so-called superusers disproportionately spread misinformation, much of it politically divisive. Internal committees proposed ways to address the issue, but the company ultimately adopted versions so weakened that their impact was blunted.

  • One proposal called for lowering recommendation scores for content posted by superusers on the far right or far left of the political spectrum. Content from moderates would receive higher scores.
  • The company accepted the approach but cut the penalties applied to extremist posts by 80 percent (a toy sketch of the scoring idea follows this list).
  • Facebook also nixed plans to build a classification system for polarizing content and quashed an effort to suppress political clickbait.
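
The reporting describes an algorithmic change to recommendation scoring. Here is a toy Python sketch of how such a downranking penalty, and its reported 80 percent reduction, might work. Every name, value, and formula here is hypothetical, not Facebook’s actual code.

    def adjusted_score(base_score, extremity, is_superuser, penalty_scale=1.0):
        """Downrank content from politically extreme superusers.

        extremity: 0.0 (moderate) to 1.0 (far left or far right).
        penalty_scale: 1.0 applies the full proposed penalty; 0.2 mimics
        the reported version after an 80 percent reduction.
        """
        if not is_superuser:
            return base_score
        penalty = 0.5 * extremity * penalty_scale  # hypothetical penalty curve
        return base_score * (1.0 - penalty)

    # Full proposal vs. the weakened version that reportedly shipped.
    print(adjusted_score(10.0, extremity=0.9, is_superuser=True))                     # 5.5
    print(adjusted_score(10.0, extremity=0.9, is_superuser=True, penalty_scale=0.2))  # 9.1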

Behind the news: Conservatives in the U.S. have long accused social media platforms of left-wing bias, a charge to which Facebook has been particularly sensitive.

  • In 2018, lawmakers grilled Facebook CEO Mark Zuckerberg over accusations that the platform marginalized conservatives.
  • Last week, Twitter put warning labels on tweets by Donald Trump that it deemed misleading or glorifying violence. The president responded with an executive order that would strip social media companies of legal protections from liability for content posted by users.
  • Facebook publishes similarly inflammatory posts by the president without challenge. Some Facebook employees protested that stance with a virtual walkout on Monday.

Facebook’s response: “We’ve built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform’s impact on society so we continue to improve,” the company said in a statement.

Why it matters: The algorithms that govern popular social media platforms have an outsized influence on political discourse worldwide, contributing to polarization, unrest, and hate crimes. Divisive rhetoric distributed by Facebook has been linked to violence in Sri Lanka, Myanmar, and India.

We’re thinking: Social media is a double-edged sword. It has been helpful for quickly disseminating (mostly accurate) information about concerns like Covid-19. But what brings people together can also drive them apart. The AI community has a responsibility to craft algorithms that support a just society even as they promote business.


Data and information related to shortcut learning

When Models Take Shortcuts

Neuroscientists once thought they could train rats to navigate mazes by color. It turns out that rats don’t perceive color at all. Instead, they rely on the distinct odors of different paints. New work finds that neural networks are especially prone to this sort of mismatch between the task they’re trained on and what they actually learn.

What’s new: Robert Geirhos, Jörn-Henrik Jacobsen, and Claudio Michaelis led a study of such neural network failures conducted at the University of Tübingen, the International Max Planck Research School for Intelligent Systems, and the University of Toronto. They argue that many of deep learning’s shortcomings stem from shortcut learning.

Key insight: Shortcuts are decision rules that yield good performance on standard benchmarks without requiring an understanding of the underlying problem, so they transfer poorly to real-world situations.

How it works: The authors identify apparent causes of shortcut learning in neural networks, circumstances that tend to encourage it, and techniques available to discourage it.

  • Dataset bias can cause models to focus on spurious correlations rather than valid relationships. For instance, cows often stand in pastures, so black, white, and green textures can indicate their presence — but a lawn is not an identifying mark of cattle. Models have a hard time learning true bovine characteristics when their training data offers this simpler approach.
  • Training data may be free of spurious correlations and still fail to represent the task at hand. For example, cats have fur while elephants have wrinkled skin, so an animal classifier may wind up becoming a texture detector instead.
  • To address such issues, the authors propose training and testing on out-of-distribution, augmented, and adversarial examples. If a model incorrectly recognizes a test sample that has been altered to change, say, the color of grass from green to brown, it’s likely the model relied on shortcuts (a minimal sketch of this check follows this list).
  • In the animal classification tasks described above, domain experts can make sure the training set depicts animals in a variety of scenes and includes breeds, such as hairless cats, that exhibit a range of textures.
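
To make the altered-sample check concrete, here is a minimal Python sketch using NumPy: score the same model on unaltered test images and on copies whose greenish pixels have been pushed toward brown. The model and dataset names are placeholders for your own classifier and data, and the recoloring is deliberately crude.

    import numpy as np

    def shift_green_to_brown(images):
        """Crudely recolor greenish pixels toward brown.
        images: float array of shape [N, H, W, 3] with values in [0, 1]."""
        out = images.copy()
        greenish = (out[..., 1] > out[..., 0]) & (out[..., 1] > out[..., 2])
        out[greenish] = (0.55, 0.35, 0.15)  # a brownish RGB value
        return out

    def accuracy(model, images, labels):
        preds = model(images).argmax(axis=-1)
        return (preds == labels).mean()

    # acc_clean = accuracy(model, test_images, test_labels)
    # acc_shifted = accuracy(model, shift_green_to_brown(test_images), test_labels)
    # A large gap between the two suggests the model leaned on a shortcut
    # (background color) rather than on the animal itself.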

Why it matters: The authors shed light on an issue that has troubled machine learning engineers for decades and highlight the lack of robustness of current algorithms. Addressing these issues will be key to scaling up practical neural network deployments.

We’re thinking: Humans also use shortcuts; we’ve all memorized formulas by rote instead of fully understanding them. Our misbehaving models may be more like us than we’d like to admit.


Dishwashing robot working

AI Does the Dishes

A pioneer in dishwashing robots is reaching into commercial kitchens.

What’s new: Dishcraft Robotics uses machines equipped with computer vision to scrub dirty dishes for corporate food services and, soon, restaurants.

How it works: Every morning, Dishcraft’s biodiesel-fueled trucks deliver clean dishes and utensils to corporate clients near its Silicon Valley hub. At the day’s end, the trucks retrieve them. Back at headquarters, workers load racks of dirty dishes and cutlery into an automated washing machine.

  • The system classifies each item and tailors its cleaning cycle accordingly, a company rep told The Batch (see the sketch after this list).
  • A pose estimation model helps suction-powered robotic arms pass items between scrubbing and rinsing stations, as seen above.
  • Another model inspects items for cleanliness. The company says its sensors can detect particles too small for humans to see.
  • A recent $20 million investment will fund the company’s expansion into reusable takeout containers. Customers will drop off soiled plasticware at set locations, and the company will clean and redistribute it to its restaurant partners.
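
The pairing of a classifier with per-item cleaning settings, as in the first bullet, is a simple control pattern. Below is a toy Python sketch of the idea; the labels, settings, and classifier interface are our own invention, not Dishcraft’s.

    # Hypothetical mapping from a vision classifier's label to wash settings.
    CYCLES = {
        "plate": {"water_temp_c": 70, "scrub_seconds": 20},
        "bowl":  {"water_temp_c": 70, "scrub_seconds": 30},
        "fork":  {"water_temp_c": 80, "scrub_seconds": 10},
    }
    DEFAULT_CYCLE = {"water_temp_c": 75, "scrub_seconds": 25}

    def choose_cycle(classify, image):
        """Run the classifier on an image of the item and pick its cycle."""
        label = classify(image)  # e.g., returns "bowl"
        return CYCLES.get(label, DEFAULT_CYCLE)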

Behind the news: Other robotics companies are also aiming to disrupt the kitchen.

  • Last year, Toyota Research Institute showed off a mechanical prototype that organizes dishes and silverware in a household dishwasher.
  • Robotics startup Moley built a pair of AI-guided arms capable of cooking everything from soups to macarons. The company plans to release a consumer model this year.

Why it matters: Dishcraft estimates its system saves clients as much as 1.6 gallons of water per meal. Its plan to clean reusable to-go containers could keep tons of waste out of landfills.

We’re thinking: Such machines also could mean fewer bodies in food-service kitchens — a plus in the Covid era but not so much for human staff who may find themselves out of a job.


A MESSAGE FROM DEEPLEARNING.AI

Use machine learning to estimate treatment effects on individual patients. Take the final course of the AI For Medicine Specialization, now available on Coursera. Enroll now


Self-driving car from the inside

Cars Idled, AV Makers Keep Rolling

The pandemic has forced self-driving car companies off the road. Now they’re moving forward by refining their mountains of training data.

What’s new: Self-driving cars typically collect real-world training data with two human operators onboard, but Covid-19 makes that unsafe at any speed. Instead, several companies are squeezing more value out of work they’ve already done, according to MIT Technology Review.

What they’re doing: Makers of autonomous vehicles are relabeling old data and fine-tuning simulations.

  • Drivers at the autonomous truck company Embark are sifting through four years of past driving records, flagging noteworthy events and annotating how vehicles should react.
  • Pittsburgh-based Aurora Innovation reassigned vehicle operators to scan its data for unusual situations that can be converted into simulated training scenarios.
  • Scale AI, a data-labeling firm, is adding detail to its datasets. It’s also developing a tool that predicts the intentions of drivers and pedestrians by tracking their gaze.
  • GM’s Cruise is updating its simulations. For instance, the company is improving the way it scores vehicle responses to uncommon occurrences such as encounters with ambulances.

Behind the news: With little income, $1.6 million in average monthly overhead, and increasingly tight funding, autonomous vehicle companies are making tough choices. Lyft, Kodiak Robotics, and Ike have laid off employees, while Zoox is looking for a buyer.

Why it matters: Data can be a renewable resource: By adding new labels and sharpening old ones, AI teams can imbue old datasets with new life. Using refurbished datasets to improve simulations compounds the effect.

We’re thinking: Development of self-driving cars had moved into the slow lane even before the pandemic. It’s better to keep making incremental progress than none at all.


Data related to YOLOv4

Another Look at YOLO

The latest update of the acclaimed real-time object detector You Only Look Once is more accurate than ever.

What’s new: Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao at Taiwan’s Institute of Information Science, Academia Sinica offer YOLOv4 — the first version not to include the architecture’s original creators.

Key insight: Rapid inference is YOLO’s claim to fame. The authors prioritized newer techniques that improve accuracy without impinging on speed (their so-called “bag of freebies”). In addition, they made improvements that boost accuracy at a minimal cost to speed (the “bag of specials”). All told, these tweaks enable the new version to outperform both its predecessor and high-accuracy competitors running at real-time frame rates.

How it works: YOLO, like most object detectors since, tacks a model that predicts bounding boxes and classes onto a pre-trained ImageNet feature extractor.
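
To illustrate that pattern, here is a hedged PyTorch sketch (much simplified relative to YOLOv4 itself) that bolts a small box-and-class prediction head onto an ImageNet-pretrained backbone. The head’s shape and the box count are arbitrary choices for the example.

    import torch
    import torchvision

    # Simplified sketch: pretrained feature extractor + detection head.
    # (Newer torchvision versions use weights=... instead of pretrained=True.)
    backbone = torchvision.models.resnet50(pretrained=True)
    backbone.fc = torch.nn.Identity()  # expose the 2048-d feature vector

    num_classes, num_boxes = 80, 3     # e.g., COCO classes; 3 boxes per image
    head = torch.nn.Linear(2048, num_boxes * (num_classes + 4))  # 4 box coords

    images = torch.randn(2, 3, 224, 224)  # dummy batch
    features = backbone(images)           # shape [2, 2048]
    preds = head(features).view(2, num_boxes, num_classes + 4)

Real detectors such as YOLO predict boxes from spatial feature maps at multiple scales rather than from a single pooled vector, but the head-on-a-backbone structure is the same.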

  • Techniques under the heading “bag of freebies” boost accuracy by adding computation during training. These include alternate bounding box loss functions, data augmentation, and decreasing the model’s confidence for ambiguous classes.
  • The authors introduce new data augmentation techniques such as Mosaic, which mixes elements drawn from four training images to place objects in novel contexts (see the sketch after this list).
  • “Bag of specials” techniques include the choice of activation function: alternatives to ReLU such as Mish are marginally slower, but they can yield better accuracy.
  • The authors accommodate users with limited hardware resources by choosing techniques that allow training on a single, reasonably affordable GPU.
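
Here is a rough NumPy sketch of Mosaic-style augmentation as described above: crops of four training images are resized into the quadrants of one canvas. Bounding-box bookkeeping, which the real method also performs, is omitted for brevity.

    import numpy as np

    def mosaic(imgs, size=416, rng=None):
        """Combine four HxWx3 images into one size-by-size mosaic."""
        rng = rng or np.random.default_rng()
        # Random point where the four quadrants meet.
        cx = int(rng.uniform(0.3, 0.7) * size)
        cy = int(rng.uniform(0.3, 0.7) * size)
        canvas = np.zeros((size, size, 3), dtype=imgs[0].dtype)
        corners = [(0, 0, cy, cx), (0, cx, cy, size),
                   (cy, 0, size, cx), (cy, cx, size, size)]
        for img, (y0, x0, y1, x1) in zip(imgs, corners):
            h, w = y1 - y0, x1 - x0
            # Naive nearest-neighbor resize of each image to its quadrant.
            ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
            xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
            canvas[y0:y1, x0:x1] = img[ys][:, xs]
        return canvas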

Results: The authors pitted YOLOv4 against other object detectors that process at least 30 frames per second, using the COCO image dataset. YOLOv4 achieved 0.435 average precision (AP), running at 62 frames per second (FPS). It achieved 0.41 AP at its maximum rate of 96 FPS. The previous state of the art, EfficientDet, achieved 0.43 AP running at nearly 42 FPS and 0.333 AP at its top speed of 62 FPS.

Why it matters: YOLOv4 locates and classifies objects faster than humans can. While it’s not as accurate as slower networks such as EfficientDet, the new version boosts accuracy without sacrificing speed.

We’re thinking: You only look once . . . twice . . . thrice . . . four times and counting!


Series of pictures of hotels and resorts located in African countries

Goodbye Tourists, Hello Labelers

Covid-19 has cost many workers their livelihood, but it has provided a lucky few on the lowest rungs of Africa’s machine learning industry with luxury suites.

What’s new: Samasource, a data labeling company headquartered in San Francisco, California, is housing its East African workforce in hotels and resorts so they can continue to work while maintaining social distance, Wired reports.

How it works: The pandemic prompted strict lockdowns in Kenya and Uganda, where Samasource employs some 2,000 workers. Many live in communities with no internet connectivity. So the company put up its workforce in four internet-equipped hotels that were vacant amid the coronavirus-driven collapse of tourism.

  • Over half the company’s workforce in East Africa agreed to the arrangement. Employees each get a suite where they must remain throughout the workday. Housekeepers handle their laundry and nurses check their temperature daily.
  • Wired profiled data-labeler Mary Akol (pictured in one of the photos above), one of 140 employees staying at the four-star Ole Sereni hotel, which overlooks Nairobi National Park.
  • Workers there are allowed to leave their rooms at sunset to watch wildlife like rhinos, zebras, and giraffes from a terrace. They also engage in socially distanced group exercise. Akol has been teaching her co-workers salsa dancing — sans partners, of course.

Behind the news: Several companies are providing jobs that help feed both the AI industry’s hunger for data and underserved communities.

  • U.S.- and India-based iMerit has an all-female center in Kolkata that employs nearly 500 Muslim women who label computer vision data for companies like eBay, Microsoft, and TripAdvisor.
  • Based in New York, Daivergent hires people on the autism spectrum to label data and helps neurodivergent people find tech jobs.

Why it matters: Socially conscious outsourcing increases the tech industry’s talent pool by providing decent jobs to people who, because of geography, gender, race, or other factors, otherwise might be locked out.

We’re thinking: The grocery industry’s Fair Trade labels help consumers distinguish between socially responsible employers and their wage-slashing competitors. A similar measure for AI would foster both growth and diversity.
