Graphs related to double descent
Stanford University

Moderating the ML Roller Coaster

Wait a minute — we added training data, and our model’s performance got worse?! New research offers a way to avoid so-called double descent.
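Double descent is the phenomenon in which test error worsens and then improves again as model capacity or training-set size grows. The sketch below is only a rough illustration of the model-wise effect, not the method from the new research: minimum-norm least squares on random ReLU features, where test error typically spikes when the number of features approaches the number of training examples and falls again beyond it. All sizes, seeds, and the noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)  # shared ground-truth weights for train and test

def make_data(n, noise=0.1):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + noise * rng.normal(size=n)

X_train, y_train = make_data(100)
X_test, y_test = make_data(2000)

def relu_features(X, W):
    # Fixed random ReLU features; only the linear readout is fitted.
    return np.maximum(X @ W, 0.0)

for n_features in [10, 50, 90, 100, 110, 200, 1000]:
    W = rng.normal(size=(d, n_features))
    Phi_tr, Phi_te = relu_features(X_train, W), relu_features(X_test, W)
    # Minimum-norm least-squares readout; the pseudo-inverse also handles
    # the overparameterized case (more features than training examples).
    coef = np.linalg.pinv(Phi_tr) @ y_train
    mse = np.mean((Phi_te @ coef - y_test) ** 2)
    print(f"features={n_features:5d}  test MSE={mse:.3f}")
```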
Screen capture of online conference called Covid-19 and AI
Stanford University

Online Conference Goes Antiviral

AI experts convened to discuss how to combat the coronavirus crisis. An online conference hosted by Stanford University’s Institute for Human-Centered AI explored how machine learning is being deployed to address this pandemic — and prepare for the next one.
Chatbot asking for Covid-19 symptoms
Stanford University

Chatbots Disagree on Covid-19

Chatbots designed to recognize Covid-19 symptoms dispense alarmingly inconsistent recommendations. Given the same symptoms, eight high-profile medical bots responded with divergent, often conflicting advice.
Women in AI in academia and industry chart
Stanford University

AI’s Gender Imbalance

Women continue to be severely underrepresented in AI. A meta-analysis of research conducted by Synced Review for Women’s History Month found that female participation in various aspects of AI typically hovers between 10 and 20 percent.
Information and images related to 6D-Pose Anchor-based Category-level Keypoint-tracker (6-PACK)
Stanford University

Deep Learning for Object Tracking

AI is good at tracking objects in two dimensions. A new model processes video from a camera with a depth sensor to predict how objects move through three-dimensional space.
Results of a technique that interprets reflected light to reveal objects outside the line of sight
Stanford University

Periscope Vision

Wouldn’t it be great to see around corners? Deep learning researchers are working on it. Their technique, deep-inverse correlography, interprets reflected light to reveal objects outside the line of sight.
Information related to the kNN-LM algorithm
Stanford University

Helpful Neighbors

School teachers may not like to hear this, but sometimes you get the best answer by peeking at your neighbor’s paper. A new language model framework peeks at the training data for context when making a prediction.
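The framework in question, kNN-LM, interpolates the base model's next-token distribution with a distribution built from the nearest stored training contexts. Here is a rough numpy sketch of that interpolation step; the datastore, the embedding shapes, and the mixing weight lam are placeholders rather than values from the paper.

```python
import numpy as np

def knn_lm_probs(query_vec, p_lm, keys, values, vocab_size, k=8, lam=0.25):
    """Blend the base LM's prediction with a nearest-neighbor prediction.

    query_vec : embedding of the current context, shape (d,)
    p_lm      : base LM next-token probabilities, shape (vocab_size,)
    keys      : stored context embeddings from training data, shape (N, d)
    values    : next token observed after each stored context, shape (N,)
    """
    # Find the k nearest stored contexts by Euclidean distance.
    dists = np.linalg.norm(keys - query_vec, axis=1)
    nn = np.argsort(dists)[:k]
    # A softmax over negative distances gives each neighbor a weight.
    w = np.exp(-dists[nn])
    w /= w.sum()
    # Accumulate neighbor weights onto the tokens those neighbors predict.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)
    # Interpolate: the neighbors "peek" at training data, the LM generalizes.
    return lam * p_knn + (1.0 - lam) * p_lm
```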
ImageNet face recognition labels on a picture
Stanford University

ImageNet Gets a Makeover

Computer scientists are struggling to purge bias from one of AI’s most important datasets. ImageNet’s 14 million photos are a go-to collection for training computer-vision systems, yet their descriptive labels have been rife with derogatory and stereotyped attitudes toward race, gender, and sex.
Excerpt from 2019 Artificial Intelligence Index
Stanford University

Tracking AI’s Global Growth

Which countries are ahead in AI? Many, in one way or another, and not always the ones you might expect. The Stanford Institute for Human-Centered Artificial Intelligence published its 2019 Artificial Intelligence Index, detailing when, where, and how AI is on the rise.
Chelsea Finn
Stanford University

Chelsea Finn: Robots That Generalize

Many people in the AI community focus on achieving flashy results, like building an agent that can win at Go or Jeopardy. This kind of work is impressive in its complexity.
Information related to Implicit Reinforcement without Interaction at Scale (IRIS)
Stanford University

Different Skills From Different Demos

Reinforcement learning trains models by trial and error. In batch reinforcement learning (BRL), models instead learn by observing many demonstrations by a variety of actors. But what if the demonstrators have different strengths? Say, one doctor is handier with a scalpel while another excels at suturing.
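As a point of reference, here is a bare-bones illustration of the batch setting itself: learning entirely from a fixed set of logged transitions, with no further interaction. It is plain tabular Q-learning over a pre-collected dataset, not the IRIS method, and every hyperparameter is a placeholder.

```python
import numpy as np

def batch_q_learning(transitions, n_states, n_actions,
                     gamma=0.99, lr=0.1, epochs=50):
    """transitions: list of (state, action, reward, next_state, done) tuples
    collected ahead of time, e.g. from many different demonstrators."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s_next, done in transitions:
            # Standard Q-learning update, but replayed over a static dataset
            # instead of fresh environment interaction.
            target = r if done else r + gamma * Q[s_next].max()
            Q[s, a] += lr * (target - Q[s, a])
    return Q
```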
Information related to Bias-Resilient Neural Network (BR-Net)
Stanford University

Bias Fighter

Sophisticated models trained on biased data can learn discriminatory patterns, leading to skewed decisions. A new solution aims to prevent neural networks from making decisions based on common biases.
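One common pattern for this kind of debiasing, shown below purely as a hedged stand-in for BR-Net's own formulation, is adversarial training: a bias-prediction head tries to recover the protected variable from the learned features, and a gradient-reversal layer pushes the encoder to discard whatever that head can exploit. The layer sizes, loss weighting, and data here are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient makes the encoder hurt the bias predictor.
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
task_head = nn.Linear(64, 2)   # main prediction (e.g., diagnosis)
bias_head = nn.Linear(64, 2)   # tries to recover the bias variable

def combined_loss(x, y_task, y_bias, lam=1.0):
    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y_task)
    bias_loss = nn.functional.cross_entropy(
        bias_head(GradReverse.apply(z, lam)), y_bias)
    return task_loss + bias_loss

# Toy usage with random data.
loss = combined_loss(torch.randn(8, 32),
                     torch.randint(0, 2, (8,)),
                     torch.randint(0, 2, (8,)))
loss.backward()
```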
Robot cooking, controlled by a person
Stanford University

Robotic Control, Easy as Apple Pie

Robots designed to assist people with disabilities have become more capable, but they’ve also become harder to control. New research offers a way to operate such complex mechanical systems more intuitively.
Process of labeling doctors' notes
Stanford University

Cracking Open Doctors’ Notes

Weak supervision is the practice of assigning likely labels to unlabeled data using a variety of simple labeling functions. Then supervised methods can be used on top of the now-labeled data.
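To make that concrete, here is a toy sketch with entirely made-up keyword rules: a few labeling functions vote on each note, abstaining when they have nothing to say, and the votes are combined by simple majority. Production systems such as Snorkel instead fit a model that weights the labeling functions by their estimated accuracy.

```python
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

# Each labeling function encodes one cheap heuristic; none is reliable alone.
def lf_mentions_fracture(note):
    return POSITIVE if "fracture" in note.lower() else ABSTAIN

def lf_denies_finding(note):
    return NEGATIVE if "no evidence of" in note.lower() else ABSTAIN

def lf_imaging_ordered(note):
    return POSITIVE if "x-ray" in note.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_fracture, lf_denies_finding, lf_imaging_ordered]

def weak_label(note):
    votes = [v for v in (lf(note) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN                     # nothing fired; leave unlabeled
    return POSITIVE if sum(votes) / len(votes) > 0.5 else NEGATIVE

notes = [
    "X-ray of the wrist shows a hairline fracture.",
    "No evidence of fracture on imaging.",
]
print([weak_label(n) for n in notes])      # -> [1, 0]
```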
Schematic of the architecture used in experiments related to systematic reasoning in deep reinforcement learning
Stanford University

How Neural Networks Generalize

Humans understand the world by abstraction: If you grasp the concept of grabbing a stick, then you’ll also comprehend grabbing a ball. New work explores deep learning agents’ ability to do the same thing — an important aspect of their ability to generalize.
