Stanford University

46 Posts

Participant responses (Likert scale) to post-survey questions about beliefs about OpenAI's Codex
Stanford University

Generated Code Generates Overconfident Coders: Copilot AI tool encourages programmers to write buggy code.

Tools that automatically write computer code may make their human users overconfident that the programs are bug-free. Stanford University researchers found that programmers who used OpenAI’s Codex, a model that generates computer code, were more likely...
Douwe Kiela
Stanford University

Douwe Kiela: Less Hype, More Caution

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.
Ground truth video of a road on the left and predicted video with MaskViT on the right
Stanford University

Seeing What Comes Next: Transformers predict future video frames.

If a robot can predict what it’s likely to see next, it may have a better basis for choosing an appropriate action — but it has to predict quickly. Transformers, for all their utility in computer vision, aren’t well suited to this because of their steep computational and memory requirements...
Animated chart shows how AI can avoid mistaking an image's subject for its context.
Stanford University

Taming Spurious Correlations: New Technique Helps AI Avoid Classification Mistakes

When a neural network learns image labels, it may confuse a background item for the labeled object. New research avoids such mistakes.
Animated charts show how scientists misuse machine learning.
Stanford University

Bad Machine Learning Makes Bad Science: Is Machine Learning Driving a Scientific Reproducibility Crisis?

A recent workshop highlighted the impact of poorly designed AI models in medicine, security, software engineering, and other disciplines.
Man sitting on a tree with a monkey and a gorilla
Stanford University

From Root to Leaves: Decision Trees for Machine Learning Explained

What kind of beast was Aristotle? The philosopher's follower Porphyry, who lived in Syria during the third century, came up with a logical way to answer the question...
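Porphyry's chain of yes/no questions is essentially what a machine-learning decision tree does: each internal node asks a question, and each leaf assigns a class. A minimal Python sketch (the questions and labels below are illustrative, loosely modeled on Porphyry's tree rather than taken from the article):

# Minimal decision-tree sketch: classify a thing with a chain of yes/no questions.
# Questions and categories are illustrative only, loosely following Porphyry's tree.

def classify(is_living: bool, is_sentient: bool, is_rational: bool) -> str:
    if not is_living:
        return "mineral"   # root split: living vs. not living
    if not is_sentient:
        return "plant"     # living but not sentient
    if not is_rational:
        return "beast"     # sentient but not rational
    return "human"         # the rational animal

# Aristotle: living, sentient, and rational.
print(classify(is_living=True, is_sentient=True, is_rational=True))  # -> "human"

In practice, libraries such as scikit-learn learn the questions and the order in which to ask them automatically from labeled data rather than having them hand-coded.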
Excerpts from the fifth annual AI Index from Stanford University’s Institute for Human-Centered AI
Stanford University

AI: A Progress Report: Stanford University's fifth annual AI Index report for 2022.

A new study showcases AI’s growing importance worldwide. What’s new: The fifth annual AI Index from Stanford University’s Institute for Human-Centered AI documents rises in funding, regulation, and performance.
Illustration of a woman riding a sled
Stanford University

Multimodal AI Takes Off: Multimodal models, such as CLIP and DALL-E, are taking over AI.

While models like GPT-3 and EfficientNet, which work on text and images respectively, are responsible for some of deep learning’s highest-profile successes, approaches that find relationships between text and images made impressive...
Halloween family portrait showing the inheritance of some spooky characteristics
Stanford University

New Models Inherit Old Flaws: AI Models May Inherit Flaws From Previous Systems

Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web.
A new framework that helps models “unlearn” information selectively and incrementally
Stanford University

Deep Unlearning: AI Researchers Teach Models to Unlearn Data

Privacy advocates want deep learning systems to forget what they’ve learned. What’s new: Researchers are seeking ways to remove the influence of particular training examples, such as an individual’s personal information, from a trained model without affecting its performance, Wired reported.
Series of images showing some of the findings of the new study by researchers at Stanford’s Institute for Human-Centered AI
Stanford University

Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.
Graphs, images and data related to the activation function known as ReLU
Stanford University

Upgrade for ReLU

The activation function known as ReLU lets a neural network build complex nonlinear functions across its layers, but those functions are piecewise linear, all flat faces and sharp edges. How much of the world breaks down into perfect polyhedra?
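For reference, ReLU itself is just max(0, x), so any network built from it computes a piecewise-linear function. A minimal NumPy sketch (illustrative, not code from the article) of a tiny ReLU network whose output bends only at "kinks" where hidden units switch on or off:

import numpy as np

def relu(x):
    # ReLU(x) = max(0, x): passes positive inputs through, zeroes out the rest.
    return np.maximum(0.0, x)

# A tiny one-hidden-layer network with ReLU activations (random, untrained weights).
# Because ReLU is piecewise linear, the network's output is piecewise linear too.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=(8,))
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=(1,))

x = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)   # a few sample inputs
hidden = relu(x @ W1.T + b1)                   # shape (7, 8)
y = hidden @ W2.T + b2                         # shape (7, 1)
print(y.ravel())

Plotting y against x would show straight segments joined at kinks, the flat faces and sharp edges mentioned above.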
Examples of image generators using GANsformer
Stanford University

Attention for Image Generation

Attention quantifies how each part of one input affects the various parts of another. Researchers added a step that reverses this comparison to produce more convincing images.
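For context, standard scaled dot-product attention compares every query position against every key position and uses the resulting weights to mix the values. The sketch below shows only that generic computation (not the paper's added, reversed step), with illustrative sizes:

import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each row of Q is scored against every row of K,
    # and the softmax-normalized scores weight a mixture of the rows of V.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # (n_queries, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))   # e.g., 4 image regions acting as queries
K = rng.normal(size=(6, 16))   # e.g., 6 latent variables acting as keys
V = rng.normal(size=(6, 16))
print(attention(Q, K, V).shape)  # (4, 16)

The article describes an added step that runs this comparison in the reverse direction as well; that modification isn't shown here.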
Selected data from the AI Index, an annual report from Stanford University
Stanford University

AI for Business Is Booming

Commercial AI research and deployments are on the rise, a new study highlights. The latest edition of the AI Index, an annual report from Stanford University, documents key trends in the field including the growing importance of private industry and the erosion of U.S. dominance in research.
Graphs and data related to ImageNet performance
Stanford University

ImageNet Performance: No Panacea

It’s commonly assumed that models pretrained to achieve high performance on ImageNet will perform better on other visual tasks after fine-tuning. But is it always true? A new study reached surprising conclusions.
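The assumption under test concerns the familiar transfer-learning recipe: take an ImageNet-pretrained backbone, replace its classification head, and fine-tune on the target task. A minimal PyTorch/torchvision sketch of that recipe (illustrative settings, not the study's setup):

import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (torchvision >= 0.13 weights API).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Swap the 1,000-class ImageNet head for one sized to the target task
# (10 classes here is an arbitrary, illustrative choice).
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# One illustrative fine-tuning step on a dummy batch.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_target_classes, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))

The study asks whether higher ImageNet accuracy in the pretrained backbone reliably translates into better results after this fine-tuning step.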
