Stanford University

49 Posts

Three Methods for Detecting Generated Text: Techniques to tell when you're reading AI-generated text

How can you tell when you’re reading machine-generated text? Three recent papers proposed solutions: watermarking, classification, and a statistical method.
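
To get a feel for the statistical flavor of detection, here is a rough sketch (not any of the papers’ exact methods): score a passage by its average token log-probability under a language model, since generated text tends to look unusually predictable to the model. The GPT-2 model and the threshold below are illustrative assumptions.

```python
# Hedged sketch: flag text that a language model finds unusually predictable.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    """Average per-token log-probability of `text` under GPT-2 (higher = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss is the mean negative log-likelihood
    return -out.loss.item()

# Illustrative cutoff; in practice it would be tuned on known human vs. generated samples.
THRESHOLD = -3.0

def looks_generated(text: str) -> bool:
    return avg_log_prob(text) > THRESHOLD

print(looks_generated("The quick brown fox jumps over the lazy dog."))
```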

AI Trends Tracked: 2023's AI trends from Stanford's AI Index

Stanford’s sixth annual AI Index takes stock of a rapidly growing field. The sprawling, 386-page report from the Institute for Human-Centered AI presents the past year’s developments in AI based on a wide variety of sources including benchmarks, papers, market research, job listings, and polls.

Unsupervised Data Pruning: New method removes useless machine learning data.

Large datasets often contain overly similar examples that consume training cycles without contributing to learning. A new method identifies and removes such redundant examples, even when they’re unlabeled.
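
As a loose illustration of the idea (not necessarily the paper’s exact metric), one common unsupervised recipe embeds the unlabeled examples, clusters them, and drops the points that sit closest to their cluster centroid, since those carry the least new information. The encoder, cluster count, and prune fraction below are assumptions.

```python
# Hedged sketch: prune redundant examples by distance to their cluster centroid.
import numpy as np
from sklearn.cluster import KMeans

def prune_indices(embeddings: np.ndarray, n_clusters: int = 50,
                  keep_fraction: float = 0.8) -> np.ndarray:
    """Return indices of examples to keep, dropping the most redundant ones."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    # Distance of each example to its assigned centroid: small distance ~ redundant.
    dists = np.linalg.norm(embeddings - km.cluster_centers_[km.labels_], axis=1)
    n_keep = int(keep_fraction * len(embeddings))
    return np.argsort(dists)[-n_keep:]  # keep the farthest (least redundant) examples

# Stand-in for real embeddings from a pretrained, self-supervised encoder.
X = np.random.randn(10_000, 512).astype("float32")
kept = prune_indices(X)
print(f"Kept {len(kept)} of {len(X)} examples")
```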

Generated Code Generates Overconfident Coders: Copilot AI tool encourages programmers to write buggy code.

Tools that automatically write computer code may make their human users overconfident that the programs are bug-free. Stanford University researchers found that programmers who used OpenAI’s Codex, a model that generates computer code, were more likely...

Douwe Kiela: Natural language processing researcher Douwe Kiela calls for less hype, more caution.

This year we really started to see the mainstreaming of AI. Systems like Stable Diffusion and ChatGPT captured the public imagination to an extent we haven’t seen before in our field.

Seeing What Comes Next: Transformers predict future video frames.

If a robot can predict what it’s likely to see next, it may have a better basis for choosing an appropriate action — but it has to predict quickly. Transformers, for all their utility in computer vision, aren’t well suited to this because of their steep computational and memory requirements...

Taming Spurious Correlations: New Technique Helps AI Avoid Classification Mistakes

When a neural network learns image labels, it may confuse a background item for the labeled object. New research avoids such mistakes.

Bad Machine Learning Makes Bad Science: Is Machine Learning Driving a Scientific Reproducibility Crisis?

A recent workshop highlighted the impact of poorly designed AI models in medicine, security, software engineering, and other disciplines.

Decision Trees: From Root to Leaves — Decision Trees for Machine Learning Explained

What kind of beast was Aristotle? The philosopher's follower Porphyry, who lived in Syria during the third century, came up with a logical way to answer the question...

AI Progress Report: Stanford University's fifth annual AI Report for 2022

A new study showcases AI’s growing importance worldwide. What’s new: The fifth annual AI Index from Stanford University’s Institute for Human-Centered AI documents rises in funding, regulation, and performance.

Multimodal AI Takes Off: Multimodal Models, such as CLIP and DALL·E, are taking over AI.

While models like GPT-3 and EfficientNet, which work on text and images respectively, are responsible for some of deep learning’s highest-profile successes, approaches that find relationships between text and images made impressive...

New Models Inherit Old Flaws: AI Models May Inherit Flaws From Previous Systems

Is AI becoming inbred? The fear: The best models increasingly are fine-tuned versions of a small number of so-called foundation models that were pretrained on immense quantities of data scraped from the web.

Deep Unlearning: AI Researchers Teach Models to Unlearn Data

Privacy advocates want deep learning systems to forget what they’ve learned. What’s new: Researchers are seeking ways to remove the influence of particular training examples, such as an individual’s personal information, from a trained model without affecting its performance, Wired reported.

Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants

A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.

Upgrade for ReLU: The sin(x) activation function is an alternative to ReLU.

The activation function known as ReLU builds complex nonlinear functions across layers of a neural network, making functions that outline flat faces and sharp edges. But how much of the world breaks down into perfect polyhedra?
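
For a concrete feel of the swap the article describes, the sketch below drops a sine activation into a small PyTorch MLP in place of ReLU; the layer sizes and frequency scale are illustrative assumptions, not the paper’s settings.

```python
# Minimal sketch: replacing ReLU with a sin(w*x) activation in a small MLP.
# ReLU yields piecewise-linear functions ("flat faces and sharp edges");
# the sine activation yields smooth, periodic building blocks.
import torch
import torch.nn as nn

class Sine(nn.Module):
    """sin(w * x) activation; w scales the frequency of the oscillation."""
    def __init__(self, w: float = 1.0):
        super().__init__()
        self.w = w

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w * x)

def mlp(activation: nn.Module) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(1, 64), activation,
        nn.Linear(64, 64), activation,
        nn.Linear(64, 1),
    )

relu_net = mlp(nn.ReLU())    # piecewise-linear function of the input
sine_net = mlp(Sine(w=1.0))  # smooth, periodic nonlinearity instead

x = torch.linspace(-3, 3, 200).unsqueeze(1)
print(relu_net(x).shape, sine_net(x).shape)  # both: torch.Size([200, 1])
```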
